Posted by SomaticPirate 9/12/2025

Float Exposed (float.exposed)
417 points | 114 comments
vismit2000 9/12/2025|
This remains one of the best explanations of the topic: https://fabiensanglard.net/floating_point_visually_explained... I saw it when I had just started using HN, and posts like this are what inspired me to stick around: https://news.ycombinator.com/item?id=29368529
cdavid 9/12/2025||
Maybe I am too mathematically inclined, but I did not find this easy to understand.

The ELI5 explanation of floating point: it gives you approximately the same accuracy (in terms of bits) independently of the scale. Whether your number is much below 1, around 1, or much above 1, you can expect to have as much precision in the leading bits.

This is the key property, but internalizing it is difficult.

ekelsen 9/12/2025|||
I like "between each power of 2, there are the same number of numbers."

So between 1/2 and 1 there are the same number of numbers as between 1024 and 2048. If you have 1024 numbers between each power of 2, then each interval is 1/2048 in the first case and 1 in the second case.

In reality there are usually:

bfloat16: 128 numbers between each power of 2

float16: 1024 numbers between each power of 2

float32: 2^23 numbers (~8 million) between each power of 2

float64: 2^52 numbers (~4.5 quadrillion) between each power of 2
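
A quick way to check this: consecutive positive floats have consecutive bit patterns, so subtracting the raw bits of two powers of two counts the floats in between. A minimal C sketch (the helper name bits_of is mine):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    static uint32_t bits_of(float f) {
        uint32_t b;
        memcpy(&b, &f, sizeof b);  // reinterpret the float's bit pattern
        return b;
    }

    int main(void) {
        printf("floats in [1, 2):       %u\n", bits_of(2.0f) - bits_of(1.0f));
        printf("floats in [1024, 2048): %u\n", bits_of(2048.0f) - bits_of(1024.0f));
        // both print 8388608, i.e. 2^23
        return 0;
    }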

hnuser123456 9/12/2025|||
Or, say, you can write any number you want, but it has to be a whole number from 0 to 9, and you can only make the number bigger or smaller by moving the decimal point, and you can only move the decimal point up to 10 spaces. And you can add or remove a negative sign in front.
larodi 9/12/2025|||
And it makes so much sense in the context of this recent blog post from TM's researchers:

https://news.ycombinator.com/item?id=45200925

nesk_ 9/12/2025|||
I've never seen this topic so well explained, thank you for sharing!
Etherlord87 9/12/2025||
From quickly scanning the article, I don't see the mantissa explained in a similarly easy way, but there is a very intuitive way to think of it. Because the mantissa (like everything else) is encoded in binary, its first explicit digit (explicit because there's an implicit 1. at the beginning) means either 0/2 or 1/2, just like in decimal the first digit after the dot means 0/10 or 1/10 or 2/10...; the next digit means 0/4 or 1/4 (0/2² or 1/2²), the third 0/8 or 1/8, etc.

You can visualize this by starting at the beginning of the "window" and dividing the window into 2 halves: the first digit of the mantissa tells you whether you stay at the beginning of the first half or move to the beginning of the 2nd half. Whichever half you picked, you divide it into 2 halves again, and the next bit of the mantissa tells you whether to advance to the next half. So you just keep subdividing, and the more bits of mantissa you have, the more you can subdivide the window. If the exponent (after removing the bias) is exactly equal to the number of explicit bits in the mantissa, the smallest subdivision cell has length exactly 1. Incrementing the exponent by 1 then doubles the window size, and without additional subdivision each cell has length exactly 2, meaning consecutive floats now increment by 2.

(Keep in mind there is a subnormal range where there's an implicit 0. at the beginning of the mantissa instead.)

To reiterate: increasing the exponent by 1 doubles the window size, so the exponent describes how many times the window size was doubled, while the number of mantissa bits describes how many times you can do the reverse and halve it; hence the relation between exponent and mantissa bits.
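
For the curious, here is that subdivision walk spelled out as a small C sketch (positive normal floats only; sign and subnormals are ignored):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <math.h>

    int main(void) {
        float f = 5.75f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);

        uint32_t mant = bits & 0x7FFFFF;             // 23 explicit mantissa bits
        int exp = (int)((bits >> 23) & 0xFF) - 127;  // remove the bias

        double window = pow(2.0, exp);  // the window is [2^exp, 2^(exp+1))
        double value = window;          // the implicit leading 1.
        double step = window / 2.0;     // first halving of the window
        for (int i = 22; i >= 0; i--) { // walk mantissa bits, MSB first
            if (mant & (1u << i))
                value += step;          // advance into the second half
            step /= 2.0;                // subdivide again
        }
        printf("%f\n", value);          // prints 5.750000
        return 0;
    }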

jjcob 9/12/2025||
One problem I was struggling with: What's the shortest, unambiguous decimal representation of a float?

For example, if you use single precision floats, then you need up to 9 digits of decimal precision to uniquely identify a float. So you would need to use a printf pattern like %.9g to print it. But then 0.1 would be output as 0.100000001, which is ugly. So a common approach is to round to 6 decimal digits: If you use %.6g, you are guaranteed that any decimal input up to 6 significant digits will be printed just like you stored it.
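
For illustration, that behaviour in a couple of lines of C:

    #include <stdio.h>

    int main(void) {
        float f = 0.1f;
        printf("%.9g\n", f);  // 0.100000001 -- round-trips, but ugly
        printf("%.6g\n", f);  // 0.1         -- pretty, but not always round-trip safe
        return 0;
    }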

But you would no longer be round-trip safe when the number is the result of a calculation. This is important when you do exact comparisons between floats (e.g. to check if data has changed).

So one idea I had was to try printing the float with 6 digits, then scanning it and seeing if it resulted in the same binary representation. If not, try using 7 digits, and so on, up to 9 digits. Then I would have the shortest decimal representation of a float.

This is my algorithm:

    // Try 6..9 significant digits; stop at the first precision that
    // round-trips back to the same float. floatValue is the input.
    int out_length;
    char buffer[32];
    for (int prec = 6; prec <= 9; prec++) {
        out_length = sprintf(buffer, "%.*g", prec, floatValue);
        if (prec == 9) {
            break;  // 9 digits always round-trip for single precision
        }
        float checked_number;
        sscanf(buffer, "%g", &checked_number);
        if (checked_number == floatValue) {
            break;
        }
    }
I wonder if there is a more efficient way to determine that shortest representation rather than running printf/scanf in a loop?
lifthrasiir 9/12/2025||
Your problem is practically important, as the result can be considered the "canonical" string representation of a given floating-point number (after adding the closeness requirement). There are many efficient algorithms as a result; selected ones include Dragon4, Grisu3, Ryu and Dragonbox. See also Google's double-conversion library, which implements the first two of them.
jjcob 9/12/2025|||
Thank you for pointing me to all the prior work on this!

I am surprised how complex the issue seems to be. I assumed there might be an elegant solution, but the problem seems to be a lot harder than I thought.

pklausler 9/12/2025||
See also the code in LLVM Flang for binary/decimal conversions. Minimal-digit decimals don't come up in Fortran as often as in other languages, but I use them when allowed by the output format. Interestingly, the last digit of a minimal decimal conversion is often irrelevant, so long as it exists.
unnah 9/12/2025|||
Since such algorithms were developed in the 1990s, nowadays you can expect your language's standard library to use them for float-to-decimal and decimal-to-float conversions. So all you need to do in code is print the float without any special formatting instructions, and you'll get the shortest unique decimal representation.
lifthrasiir 9/12/2025||
Except that C specifies that floating-point numbers are printed with a fixed precision (6 decimal digits) when no precision is given. Internally libcs do use some sort of float-to-decimal algorithm [1], but you can't get the shortest representation out of them.

[1] Some (e.g. Windows CRT) do use the shortest representation as a basis, in which case you can actually extract it with large enough precision (where all subsequent digits will be zeros). But many libcs print the exact representation instead (e.g. 3.140000000000000124344978758017532527446746826171875 for `printf("%.51f", 3.14)`), so they are useless for our purpose.

unnah 9/12/2025|||
Ok, you got me there. Looks like they fixed that in C++17 with std::to_chars. https://en.cppreference.com/w/cpp/utility/to_chars.html
Sharlin 9/12/2025|||
That's what the %g format specifier is for.

  printf("%f\n", 3.14); // 3.140000
  printf("%g\n", 3.14); // 3.14
lifthrasiir 9/12/2025||
This is a common misconception. Quoting the ISO C standard (actually the final draft of C23, but that should be enough for this purpose):

> g, G: A double argument representing a floating-point number is converted in style f or e (or in style F or E in the case of a G conversion specifier), depending on the value converted and the precision. Let P equal the precision if nonzero, 6 if the precision is omitted, or 1 if the precision is zero. Then, if a conversion with style E would have an exponent of X: if P > X ≥ −4, the conversion is with style f (or F) and precision P − (X + 1); otherwise, the conversion is with style e (or E) and precision P − 1.

Note that it doesn't say anything about, say, the inherent precision of the number. It is a simple remapping to %f or %e depending on the precision value.

Sharlin 9/12/2025||
Hmm, is that then just an extension that's de facto standard? Every compiler I tried at godbolt.org prints 3.140000 and 3.14 respectively.
lifthrasiir 9/12/2025||
3.14 is the correct answer for both %g and the shortest possible representation. Try 1.0 / 3.0 instead; it won't show all digits required for round-tripping.
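
To see it, compare the default %g output with one that forces enough digits for a double (17):

    #include <stdio.h>

    int main(void) {
        double x = 1.0 / 3.0;
        printf("%g\n", x);    // 0.333333 -- only 6 significant digits
        printf("%.17g\n", x); // 0.33333333333333331 -- enough to round-trip
        return 0;
    }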
Etherlord87 9/12/2025|||
Don't worry about negative comments. Yes, this is not the best way, but (if there's no error; I didn't analyze it thoroughly) it's often good enough; if it works, it works.

I once wanted to find a vector that the Euler rotation (5°, 5°, 0) maps to itself. So I just ran a loop of a million iterations or so: take a starting vector, perturb it slightly (add a small random vector), and keep the change only if the rotated result ends up closer to the original than the previous vector's did; otherwise discard it. The script ran for a couple of seconds in Python, and with the perturbation shrinking as the iteration count grew, I got a perfect result (up to the limited float precision of the vector). 2s would be terribly slow in a library but completely satisfying for my needs :D

jjcob 9/15/2025|||
Thank you for your encouragement :)

I read a bit more about the topic, and it seems that the issue with my approach is that the decimal representation might end up exactly halfway between two floats, and then the result of parsing it depends on the rounding mode that the parser uses. (By default scanf should use round-to-even, but I'm not sure all implementations do so)

In the PostgreSQL docs I found a curious fact: They use an algorithm that makes sure the printed decimal is never exactly half way between two representable floats, so the result of scanning the decimal representation doesn't depend on the rounding mode.

oasisaimlessly 9/12/2025||||
FYI, you can solve your problem in closed form by converting your Euler angles into the 'rotation vector'[1] or 'axis-angle'[2] representations and then normalizing the resulting vector.

[1]: https://en.wikipedia.org/wiki/Axis%E2%80%93angle_representat...

[2]: https://en.wikipedia.org/wiki/Axis%E2%80%93angle_representat...

Etherlord87 9/12/2025||
Cool! Thanks for that; it makes perfect sense: if you convert the rotation to axis-angle, and then take that axis, obviously a vector rotated around the axis defined by itself won't change.

I just tested it:

    from bpy import context as C
    from mathutils import Vector, Euler
    from math import radians as rad

    r = Euler((rad(5), rad(5), 0))
    ob = C.object
    ob.rotation_euler = r
    ob.rotation_mode = 'AXIS_ANGLE'
    a, x, y, z = ob.rotation_axis_angle
    v = Vector((x, y, z))
    print(v)
    v.rotate(r)
    print(v)
    print("--")

Can be done without using an object:

    from mathutils import Vector, Euler
    from math import radians as rad

    r = Euler((rad(5), rad(5), 0))
    v = Vector(r.to_quaternion().axis)
    print(v)
    v.rotate(r)
    print(v)
    print("--")
Sharlin 9/12/2025|||
I guess that's faster than working out the eigenvectors by hand.
IshKebab 9/12/2025|||
> I wonder if there is a more efficient way to determine that shortest representation rather than running printf/scanf in a loop?

Yes: in most modern languages, just printing the float with the default formatting gets you that (C's printf is a notable exception, as discussed elsewhere in this thread).

The actual algorithms to do the float->string conversion are quite complicated. Here is a recent pretty good one: https://github.com/ulfjack/ryu

I think there's been an even more recent one that is even more efficient than Ryu but I don't remember the name.

forrestthewoods 9/12/2025|||
std::numeric_limits<float>::max_digits10

https://en.cppreference.com/w/cpp/types/numeric_limits/max_d...

burnt-resistor 9/12/2025||
No, just no. And, never use sscanf().

This is totally pointless when serializing to and from a binary-equivalent unsigned integer would be perfectly reversible and therefore wouldn't lose any information.

    #include <assert.h>
    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>

    double f = 0.0/0.0; // might need some compiler flags to make this a soft error.
    double g;
    char s[21]; // a uint64_t takes up to 20 decimal digits, plus the NUL

    assert(sizeof(double) == sizeof(uint64_t));

    snprintf(s, sizeof s, "%" PRIu64, *(uint64_t *)(&f));

    *(uint64_t *)(&g) = strtoull(s, NULL, 10); // parse it back, no sscanf needed
If you want something shorter, apply some sort of heuristic that doesn't sacrifice faithful reproduction of the original representation, e.g., idempotency.
Lvl999Noob 9/12/2025|||
Off-topic: for this kind of pointer casting, shouldn't you be using a union? I believe this is undefined behaviour as written.
Someone 9/12/2025|||
As written, it is UB, yes. But certainly in C++, and I think also in C, using a union is undefined behavior too. I think (assuming int and float to be of the same size) the main risk is that, if you do

   union {
     float f;
     int i;
   } foo;
   foo.f = 3.14;
   printf("%x", foo.i);
that the compiler can think the assignment to foo.f isn't used anywhere, and thus can choose not to do it.

In C++, you have to use memmove (compilers can and often do recognize that idiom)

eska 9/12/2025|||
Yes, it violates the standard, although in practice it should work because the alignment is the same.
lifthrasiir 9/12/2025|||
If the goal is just to reliably serialize and deserialize floating point numbers, use `%a` instead. It will print and read hexadecimal floating point literals.
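
A quick sketch of that %a round trip (C99):

    #include <stdio.h>

    int main(void) {
        double f = 3.14, g = 0.0;
        char buf[64];
        snprintf(buf, sizeof buf, "%a", f); // e.g. "0x1.91eb851eb851fp+1"
        sscanf(buf, "%la", &g);             // parse the hex literal back
        printf("%s round-trips: %d\n", buf, f == g); // prints ... 1
        return 0;
    }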
ridiculous_fish 9/12/2025||
My favorite FP Fun Fact is that float comparisons can (almost) use integer comparisons. To determine if a > b, reinterpret a and b as signed ints and just compare those like any old ints. It (almost) works!

The implication is that the next biggest float is (almost) always what you get when you reinterpret its bits as an integer, and add one. For example, start with the zero float: all bits zero. Add one using integer arithmetic. In int-speak it's just one; in float-speak it's a tiny-mantissa denormal. But that's the next float; and `nextafter` is implemented using integer arithmetic.

Learning that floats are ordered according to integer comparisons makes it feel way more natural. But of course there's the usual asterisks: this fails with NaNs, infinities, and negative zero. We get a few nice things, but only a few.
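
A minimal sketch of the bit-increment trick for positive finite floats (the caveats above apply; memcpy sidesteps the aliasing UB discussed elsewhere in the thread):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    static float next_up(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits); // reinterpret the bits
        bits += 1;                      // next representable magnitude
        memcpy(&f, &bits, sizeof f);
        return f;
    }

    int main(void) {
        printf("%.9g\n", next_up(0.0f)); // 1.40129846e-45, the smallest denormal
        printf("%.9g\n", next_up(1.0f)); // 1.00000012
        return 0;
    }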

Sniffnoy 9/12/2025||
This isn't accurate. It's true for positive numbers, and when comparing a positive to a negative, but false for comparisons between negative numbers. Standard floating point uses sign-magnitude representation, while signed integers these days use 2s-complement. On negative numbers, comparisons are reversed between these two encodings. Incrementing a float as if it were an integer will, in ordinary circumstances, get you the next one larger in magnitude, but with the same sign -- i.e., you go up for positives but down for negatives. Whereas with signed integers, you always go up except when there's an overflow into the sign bit.

A more correct version of the statement would be that comparison is the same as on sign-magnitude integers. Of course, this still has the caveats you already mentioned.

adgjlsfhk1 9/12/2025|||
One of the unambiguously nice things about posits (unlike floats) is that they use a two's-complement scheme, which makes it actually true for all values that they sort like integers.
Sniffnoy 9/12/2025||
I was going to say "well, you've still got NaR", but apparently that's been defined to sort less than all other posits? Huh, OK.
adgjlsfhk1 9/12/2025||
yeah. having a total order just makes everything so much nicer. total order is one of the defining properties of the reals, and realistically if the user calls sort (or puts one in a B-tree), you have to put the NaNs at one side or the other (unless you're C/C++ and allow that to launch the nukes)
ridiculous_fish 9/12/2025|||
You're right, thank you for the correction.
Sharlin 9/12/2025|||
For what it's worth, here's the Rust std implementation [1] of the total (as in, makes NaNs comparable) comparison algorithm given by IEEE 754:

  let mut left = self.to_bits() as i32;
  let mut right = other.to_bits() as i32;

  // In case of negatives, flip all the bits except the sign
  // to achieve a similar layout as two's complement integers

  left ^= (((left >> 31) as u32) >> 1) as i32;
  right ^= (((right >> 31) as u32) >> 1) as i32;

  left.cmp(&right)
[1] https://doc.rust-lang.org/src/core/num/f32.rs.html#1348
splicer 9/12/2025||
Thanks for HexFiend, man! I use it several times a week.
ForOldHack 9/14/2025||
I seriously do NOT want back the two-plus hours I spent diddling (1-bit NOT) all those bits. I first learned it in binary on an HP, then had to learn it on an IBM mainframe, and then got an 8087, and it was so incredibly fast, but error crept in for Fourier transforms. Started using extended double to keep the errors at bay. Hilbert spaces, here we came.

The killer app was not Lotus 1-2-3 v2 but Turbo Pascal with 8087 support. It screamed through tensors and 3D spaces, places we had only seen on plotters.

It was not until I got a G3 and used Graphing Calculator that I could explore sombrero functions of increasing frequency.

Floating point math is essential, not an option.

SomaticPirate 9/12/2025||
This came up during my OMSCS Game AI course as an example of the dangers of using floats to represent game object locations. As you get further from the origin (or whatever game element you are referencing from), you lose precision, because more of the significand is needed for the integer part of the larger value, leaving fewer bits for the fraction.
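
One way to see the effect is to print the gap to the next representable float (one ULP) at increasing distances from the origin; a sketch:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        // the gap doubles every time the magnitude crosses a power of two
        for (float d = 1.0f; d <= 16777216.0f; d *= 64.0f)
            printf("at %12.1f the next float is %g away\n",
                   d, nextafterf(d, INFINITY) - d);
        return 0;
    }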
efskap 9/12/2025||
I love how that became part of the "mythology" of Minecraft as the "Far Lands", where travelling far from the world origin made terrain generation and physics break down, subtly at first, and less so as you kept going.

It's like the paranormal trope of an expedition encountering things being disconcertingly "off" at first, and then eventually the laws of nature start breaking down as well. All because of float precision.

Terr_ 9/12/2025|||
That makes me think of games where the story places the protagonist in a simulation, which is incredibly convenient for the real-world developers and authors. All the quirks and limitations of the real game (invisible barriers, fail on killing a story-critical NPC, etc.) can be blamed on issues in the fictional one.

For example, the Assassin's Creed series.

Frotag 9/12/2025|||
Was curious what this looked like, so I did a Google search.

From the wiki, the Far Lands ("spongy walls of terrain") weren't caused by precision loss but by integer overflow in the terrain generation.

https://minecraft.wiki/w/Far_Lands

pixelfarmer 9/12/2025|||
If you have a large set of, let's say, floats in the range between 0 and 1 and you add them up, there is the straightforward way to do it, and there is a way where you pair them all up, add the pairs, and repeat that process until you have the final result. This more elaborate scheme gets a much more accurate result than simply adding them up in sequence. This is huge, because a lot of float processing is done with this inherent issue entirely disregarded (and it has an impact on the final result).
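
A minimal sketch of the two approaches (naive_sum and pairwise_sum are my names for them):

    #include <stdio.h>

    static float naive_sum(const float *a, int n) {
        float s = 0.0f;
        for (int i = 0; i < n; i++) s += a[i]; // rounding errors pile up linearly
        return s;
    }

    static float pairwise_sum(const float *a, int n) {
        if (n == 1) return a[0];
        int half = n / 2; // sum the halves, then add: error grows ~log n
        return pairwise_sum(a, half) + pairwise_sum(a + half, n - half);
    }

    int main(void) {
        enum { N = 1 << 20 };
        static float a[N];
        for (int i = 0; i < N; i++) a[i] = 0.1f;
        printf("naive:    %f\n", naive_sum(a, N));    // drifts noticeably
        printf("pairwise: %f\n", pairwise_sum(a, N)); // much closer to 104857.6
        return 0;
    }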

Donald Knuth covers all of that in one of his "The Art of Computer Programming" books, with estimates of the error introduced, basic facts like a + (b + c) != (a + b) + c for floats, and similar things.

And believe it or not, real-world problems have come out of that corner. I remember Patriot missile systems requiring a restart because they did time accounting with floats and one part of the software didn't handle the corrections for it properly, resulting in the missiles going more and more off-target the longer the system was running. So they had to be restarted every 24 hours or so to keep the error within certain limits until the issue was fixed (and the systems updated). There have also been massive structures failing due to float issues (like material thicknesses calculated too thin), etc.

KeplerBoy 9/12/2025||
There are some really simple examples of that. Just try adding 1 to a half-precision float in a loop. The accumulator will stop increasing at a mere 2048: since 2049 is not representable, 2048 + 1 rounds back down to 2048.
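
C has no standard half type, but the same stall shows up in float32 at 2^24, where the spacing between floats reaches 2:

    #include <stdio.h>

    int main(void) {
        float acc = 0.0f;
        for (long i = 0; i < 20000000; i++)
            acc += 1.0f;          // 16777216 + 1 rounds back to 16777216
        printf("%.1f\n", acc);    // 16777216.0, not 20000000.0
        return 0;
    }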
ForOldHack 9/14/2025||
So as the sum grows with the square, the error function grows with the same magnitude. A very interesting exercise. I'll get an AI to choke and puke on that.
mwkaufma 9/12/2025|||
Define boundary conditions: how much precision do you need? Then you can compute the min/max distances. If the "world" needs to be larger, prepare to divide it into sectors, with separate global/local coordinates (e.g. No Man's Sky works this way).

Really though, games are theater tech, not science. Double precision will be more than enough for anything but the most exotic use cases.

The most important thing is just to remember not to add very-big and very-small numbers together.
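
A minimal sketch of the sector + global/local split (the names and the 1024-unit sector size are mine, not from any particular engine):

    /* Sector indices are exact integers; only the small local offset is
       a float, so positional precision doesn't degrade with world size. */
    typedef struct {
        int   sx, sy, sz;   /* which sector the object is in (exact) */
        float lx, ly, lz;   /* offset within the sector, in [0, 1024) */
    } WorldPos;

    /* Camera-relative positions stay small, hence precise. */
    static float rel_x(WorldPos p, WorldPos cam) {
        return (float)(p.sx - cam.sx) * 1024.0f + (p.lx - cam.lx);
    }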

kbolino 9/12/2025|||
The problem with double-precision in video games is that the GPU hardware does not support it. So you are plagued with tedious conversions and tricks like "global vs. local coordinates" etc.
mwkaufma 9/12/2025||
100%! OTOH:
- Constant-factor scaling between game and render world space fixes a lot (gfx often needs less precision than physics).
- Most view coords are in view or clip space, which are less impacted, so large world coords tend not to code-sprawl even when introduced.
skeezyboy 9/12/2025|||
> Define boundary conditions -- how much precision do you need?

Imagine if integer arithmetic gave wrong answers in certain conditions, lol. Why did we choose the current compromise?

kbolino 9/12/2025|||
In my experience, most code that operates on integers does not anticipate overflow or wraparound. So it is almost always guaranteed to produce wrong results when these conditions occur, and is only saved by the fact that they usually don't occur in practice.

It is odd to me that every major CPU instruction set has ALU flags to indicate when these conditions have occurred, and yet many programming languages ignore them entirely or make them hard to access. Rust at least has the quartet of saturating, wrapping, checked, and unchecked arithmetic operations.
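
In C you can get at those flags through compiler intrinsics; a sketch using the GCC/Clang __builtin_add_overflow extension (not ISO C):

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        int sum;
        if (__builtin_add_overflow(INT_MAX, 1, &sum))
            puts("overflow detected"); // taken: INT_MAX + 1 doesn't fit
        else
            printf("sum = %d\n", sum);
        return 0;
    }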

ForOldHack 9/14/2025||
The trick is to get your ALUs to do some of the math for you. Oh, I miss the days of the 68020's fast barrel shifter and the 68030's byte smears. Tricky stuff lost to the silicon/sands of time.
ForOldHack 9/14/2025||||
Compromises. We had BCD for finance, binary for games, and floating point for math. I wrote a sample 'make change' program using floating point, BCD, and integers (normalizing by multiplying by 100). The integer version ripped through it, but surprisingly BCD kept up with FP, and with compiler optimizations it was significantly faster in certain edge cases and unit tests.

You get surprising things with commonplace problems.

mwkaufma 9/12/2025|||
They're not "wrong" -- the error bars are well-defined.

Signed Integer Overflow OTOH is Undefined Behavior, so it's worse.

danhau 9/12/2025||
Kerbal Space Program required a lot of clever engineering to cram an entire solar system into the constraints of 32-bit floats. There are articles and videos out there; highly recommended!
mrguyorama 9/13/2025||
Didn't they just do the usual "the world is centered on you" trick? And then switch to 64-bit coordinates anyway?
danhau 9/19/2025||
That's one of the things, yeah. IIRC they tried 64-bit but gave up on it, one reason being the rendering code was hardwired to 32-bit, so there was no real benefit in the end.

There are other solutions they had to find. The game has two "orbital mechanics engines": one runs a physics simulation, and the other is just ellipse math, making the rocket "move on rails". That's why you can't time warp under a certain altitude (that's where the real physics simulation runs).

There are also rendering tricks, like how they faked the large-scale graphics of Kerbin, which involved problems with the depth buffer, I believe.

omoikane 9/12/2025||
Previously I was using this site to explore floating point representations:

https://www.h-schmidt.net/FloatConverter/IEEE754.html

This one has the extra feature of showing the conversion error, but it doesn't support double precision.

Etherlord87 9/12/2025|
I was looking through the comments to see if someone already mentioned it. Great webpage!

However, the one in the OP has an amazing graph that intuitively explains the partitioning of the numeric space: the vertical axis is logarithmic, and the horizontal axis is linear within each row, but rows are normalized to fit the range between the logarithmic values on the vertical axis. I guess it's obvious once you comfortably understand floats, but it could use some explanation for those still learning.

yuvadam 9/12/2025||
This is cool; it looks visually a lot like a CIDR range calculator [1] I built a few years ago to help me understand network ranges better.

These types of visualizations are super useful.

[1] - https://cidr.xyz

burnt-resistor 9/12/2025||
Far too superficial because it lacks explanation of non-normal conditions (denormals, zeroes, infinities, sNaNs, and qNaNs) and the complete mapping of values. This isn't properly educational.
ForOldHack 9/14/2025|
"An sNaN (signaling Not-a-Number) is a special floating-point value that is designed to trigger a hardware trap or exception when it is used in an arithmetic operation. This differs from a qNaN (quiet Not-a-Number), which propagates through calculations without causing an immediate exception. Handling sNaNs requires a more deliberate approach that involves enabling floating-point traps and writing a custom trap handler."

Just learned something. Thanks.

slater 9/12/2025||
Why tf is there a .exposed TLD?
shoo 9/12/2025||
> THE .EXPOSED TLD This TLD is attractive and useful to end-users as it better facilitates search, self-expression, information sharing and the provision of legitimate goods and services. Along with the other TLDs in the Donuts family, this TLD will provide Internet users with opportunities for online identities and expression that do not currently exist. In doing so, the TLD will introduce significant consumer choice and competition to the Internet namespace – the very purpose of ICANN’s new TLD program.

> .EXPOSED will be utilized by registrants seeking new avenues for expression on the Internet. There is a deep history of progressivity and societal advancement resulting from the online free expressions of criticism. Individuals and groups will register names in .EXPOSED when interested in editorializing, providing input, revealing new facts or views, interacting with other communities, and publishing commentary.

-- https://icannwiki.org/.exposed

Joker_vD 9/12/2025||
> better facilitates... the provision of legitimate goods and services.

Like what? A risque lingerie shop at balls.exposed or something? And new TLDs don't in any way facilitate "better search", you know, nor "information sharing".

> Along with the other TLDs in the Donuts family

Sorry, the what family?

> online identities and expression that do not currently exist.

What does this phrase even mean?

> the TLD will introduce significant consumer choice and competition to the Internet namespace – the very purpose of ICANN’s new TLD program.

"Considered harmful" etc.

> Individuals and groups will register names in .EXPOSED when interested in editorializing, providing input, revealing new facts or views, interacting with other communities, and publishing commentary.

Still not sure how "provision of legitimate goods" fits into this. Or the floating point formats, for that matter.

maxbond 9/12/2025||
For the same reason there's a .sucks TLD. There's a market for it.
magackame 9/12/2025||
So windows.sucks and linux.sucks are available at 2000 USD/year, emacs.sucks is 200 USD/year, and vi.sucks is already registered (but no website, unfortunately)!

On the other hand linux.rocks and windows.rocks are taken (no website), vi.rocks is 200 USD/year and emacs.rocks is just 14 USD/year.

microsoft.sucks redirects to microsoft.com, but microsoft.rocks is just taken :thinking:

8cvor6j844qw_d6 9/12/2025||
Pretty sure there's a domain-monitoring service or something along these lines that buys up domains like these to prevent usage.

On that note, I've been trying to see if GoDaddy will buy a domain and resell it for a higher price, by searching for some plausibly nice domain names on their site. They haven't taken the "bait" yet.

15155 9/12/2025||
MarkMonitor
CraigJPerry 9/12/2025||
The award for "most fun integer" in 32 bit float is 16777217 (and 9007199254740992 for 64bit).

It's sometimes fun to have these kinds of edge cases up your sleeve when testing things.

lifthrasiir 9/12/2025||
For 64-bit, 9007199254740991 is also known as `Number.MAX_SAFE_INTEGER` in JavaScript. Note that this value is not even; the next integer, ..0992, is still safe to represent on its own, but it can't be distinguished from the definitely unsafe integer ..0993, which rounds to ..0992.
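
That indistinguishability is easy to demonstrate in C (a double is the same IEEE 754 binary64 as a JS number):

    #include <stdio.h>

    int main(void) {
        double max_safe = 9007199254740991.0;  // 2^53 - 1
        printf("%.0f\n", max_safe + 1.0);      // 9007199254740992
        printf("%.0f\n", max_safe + 2.0);      // also 9007199254740992!
        return 0;
    }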
simonask 9/12/2025||
You mean ±9,007,199,254,740,993.0 for 64-bit floats. :-)

For other curious readers, these are one beyond the largest integer values that can be represented exactly. In other words, the next representable value away from zero after ±16,777,216.0 is ±16,777,218.0 in 32 bits; the value ±16,777,217.0 cannot be represented, and rounds to ±16,777,216.0 under the default round-to-nearest-even mode.

Precision rounding is one of those things that people often overlook.

porker 9/12/2025|
It doesn't seem to have been shared in this thread, but my favourite site on this topic is https://0.30000000000000004.com/