Most ints are not floats

zdw · 39 points · 62 comments · 6/27/2025, 3:57:45 PM · johndcook.com ↗

Comments (62)

petermcneeley · 4h ago
Another unfortunate fact is that max int (signed and unsigned) is also not a float. This means you cannot write a clamped ftoi conversion purely in floating point (because the clamp value itself is not representable). This is why WebGPU (WGSL) does not fully saturate on ftoi.

https://www.w3.org/TR/WGSL/#floating-point-conversion
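
A minimal C sketch of the pitfall (assuming a typical round-to-nearest platform; the clamp check is purely illustrative):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* INT_MAX = 2147483647 is not representable as a float;
           the conversion rounds it up to 2^31 = 2147483648.0f. */
        float limit = (float)INT_MAX;
        printf("%.1f\n", limit);                  /* 2147483648.0 */

        /* A naive float-space clamp therefore lets 2^31 through, even
           though converting it back to int overflows (UB in C). */
        float x = 2147483648.0f;
        if (x <= limit) {
            printf("clamp passed, but (int)x would overflow\n");
        }
        return 0;
    }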

squirrellous · 41m ago
True but misleading. Like one of the other commenters also mentioned, most integers are small. With 32-bit floats you might run into integers above 2^23 once in a while, but with 64-bit floats it’s really not that common to have integers above 2^53, unless it’s a bit pattern rather than a natural number. So you could reasonably say “most integers _are_ 64-bit floats.”
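
A quick check of the 2^53 boundary, as a hedged C sketch (the constants are just for illustration):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* 2^53 is where consecutive doubles become 2 apart, so
           2^53 + 1 has no exact double and rounds back down. */
        int64_t big = (1LL << 53) + 1;                   /* 9007199254740993 */
        double  d   = (double)big;                       /* 9007199254740992.0 */
        printf("%lld\n", (long long)((int64_t)d - big)); /* -1 */

        /* At or below 2^53, int64 values of that size convert exactly. */
        int64_t ok = 1LL << 53;
        printf("%d\n", (int64_t)(double)ok == ok);       /* 1 */
        return 0;
    }
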
taeric · 2d ago
It is a shame we tend to teach floats as the computer version of reals. Thinking of them as "scientific numbers" really helps a ton with this.
atn34 · 6h ago
I want to make a Birds Aren't Real[0] style t-shirt that says "Floats Aren't Real"

[0]: https://en.wikipedia.org/wiki/Birds_Aren%27t_Real

tialaramex · 5h ago
I would buy this T-shirt. I would also take "Floats Aren't Normal"
munchler · 6h ago
True, but we also have to be careful about teaching ints as the computer version of integers.
MathMonkeyMan · 5h ago
Unsigned ints are the non-negative integers mod 2^n.

Signed ints behave like the integers in some tiny subset of representable values. Maybe it's something like the interval (-sqrt(INT_MAX), sqrt(INT_MAX)).

sjrd · 4h ago
Signed ints are also the integers mod 2^n. The beauty of modular arithmetic is that it's all equivalent, at least for all the operations that work in modular arithmetic in the first place. They just have different canonical representatives for their respective equivalence classes, which are used for the operations that don't work in modular arithmetic (like division, comparison, or conversion to string with a sign character).
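
A small C illustration of the shared equivalence classes (using unsigned arithmetic and memcpy, since overflowing signed arithmetic itself is undefined behaviour in C, as noted below):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Same bit pattern, different canonical representative mod 2^32. */
        uint32_t bits = 0xFFFFFFFFu;          /* 4294967295 as unsigned */
        int32_t  s;
        memcpy(&s, &bits, sizeof bits);       /* -1 as signed */
        printf("%u %d\n", bits, s);

        /* Addition acts on the equivalence classes, so both views
           agree bit-for-bit on the result. */
        uint32_t usum = bits + 10u;           /* wraps around to 9 */
        int32_t  ssum = s + 10;               /* -1 + 10 = 9 */
        printf("%u %d\n", usum, ssum);
        return 0;
    }
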
SkeuomorphicBee · 3h ago
Not in C. In C signed integer overflow is undefined behaviour that may or may not be compiled to the equivalent of mod arithmetic depending on the whims of the compiler.
AlotOfReading · 4h ago
That's one way of looking at them. You can also look at the signed integers as bounded 2-adic numbers.
LegionMammal978 · 4h ago
"Bounded 2-adic integers" would only make sense if you were bounding the 2-adic norm. Integers mod 2^n would be closer to "approximate fixed-point 2-adic integers".

(Alas, most languages don't expose a convenient multiplicative inverse for their integer types, and it's a PITA to write a good implementation of the extended Euclidean algorithm every time.)
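For what it's worth, mod 2^n you can dodge the extended Euclidean algorithm entirely: for odd values, a Newton/Hensel-style iteration doubles the number of correct low bits each step. A hedged sketch (the function name is just illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Multiplicative inverse of an odd a modulo 2^32.
       x <- x * (2 - a*x) doubles the correct low bits each step. */
    static uint32_t inverse_mod_2_32(uint32_t a) {
        uint32_t x = a;                /* correct to 3 bits for odd a */
        for (int i = 0; i < 4; i++)    /* 3 -> 6 -> 12 -> 24 -> 48 bits */
            x *= 2 - a * x;
        return x;
    }

    int main(void) {
        uint32_t a = 12345u;                      /* must be odd: even values
                                                     have no inverse mod 2^n */
        printf("%u\n", a * inverse_mod_2_32(a));  /* 1 */
        return 0;
    }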

jrvieira · 5h ago
wait, why?
neepi · 5h ago
Proper integers aren’t bounded. Computer ints are.
jrvieira · 4h ago
Unbounded integer types exist, which have infinite precision in theory and are only limited by available memory in practice.

You can make the argument that "proper" integers are also bounded in practice by limitations of our universe :)

layer8 · 2h ago
Unbounded integer types aren't ints.

The important point is that the arithmetic operators on int perform modulo arithmetic, not the normal arithmetic you would expect on unbounded integers. This is often not explained when first teaching ints.

analog31 · 2h ago
The pitfalls of floats were taught when I was a college math major. That was in the mid 80s.
perching_aix · 6h ago
Are they even reals? Math classes were a while ago at this point, but I'm fairly convinced they're just rationals. Not trying to be pedantic, just wondering.
ants_everywhere · 5h ago
I think the parent comment is saying it's confusing to associate floats with decimals like 0.123.

Instead it's more accurate to think of them as being in scientific notation like 1.23E-1.

In this notation it's clearer that they're sparsely populated because some of the 32 bits encode the exponent, which grows and shrinks very quickly.

But yes rationals are reals. It's clear that you can't represent, say, all digits of pi in 32 bits, so the parent comment was not saying that 32 bit floats are all of the reals.
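
The sparsity is easy to see by asking how far apart neighbouring floats are at different magnitudes; a small C check using nextafterf:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* The gap to the next representable float grows with magnitude,
           because the exponent scales the 23-bit fraction. */
        printf("%g\n", nextafterf(1.0f, 2.0f) - 1.0f);        /* ~1.19e-07 */
        printf("%g\n", nextafterf(1.0e9f, 2.0e9f) - 1.0e9f);  /* 64 */
        return 0;
    }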

perching_aix · 5h ago
Yeah, that's fair. Personally, I like to think of them as a log compressed way of expressing fractional values, like how one would record in log with a camera to capture rich dark scenes while maintaining highlight detail. I think the bigger problem with floats is that the types and operations around them are pretty loose and permissive, although maybe I just don't appreciate how well the usual compromises work. Was pretty cool to dig into arbitrary precision math libraries a while back though, found some fun stuff in there. Also found out that my Android phone's Calculator app is not calculating in base 10, unlike Windows' Calculator...
jcranmer · 5h ago
Floats [if you ignore -0.0, infinities, and NaNs] are a subset of the rationals, themselves a subset of the real numbers.

It's generally accurate to consider floats an acceptable approximation of the [extended] reals, since it's possible to do operations on them that don't exist for rational numbers, like sqrt or exp.

perching_aix · 5h ago
> since it's possible to do operations on them that don't exist for rational numbers, like sqrt or exp

This kinda sent me on a spin, for a moment I thought my whole life was a lie and these functions don't take rationals as inputs somehow. Then I realized you mean rather that they typically produce non-rationals, so the outputs will be approximated.

neepi · 5h ago
They aren’t reals. They aren’t continuous and are bounded.
MathMonkeyMan · 5h ago
And the operators +, -, *, and / lack some of the properties of addition, subtraction, multiplication, and division.

Tom7 has a good video about this: https://www.youtube.com/watch?v=5TFDG-y-EHs

jrvieira · 5h ago
Yes, all floating-point numbers are rational. (It's also true that they are all reals, but I get your point.)
neepi · 5h ago
They aren’t pure rational either. They are a subset of rational numbers.
perching_aix · 5h ago
I think that's more of how one frames it, no? Like you won't be able to store any arbitrary rational in a float, as you'd need arbitrarily large storage for that. But all the numbers a float can store are rationals (so excluding all the fanciful IEEE-754 features of course).
tialaramex · 5h ago
It's not so much the need for arbitrary storage, the problem is that even easy rationals can't be expressed in the IEEE floats

Take realistic::Rational::fraction(1, 3) ie one third. Floats can't represent that, but we don't need a whole lot of space for it, we're just storing the numerator and denominator.

If we say we actually want f64, the 8 byte IEEE float, we get only a weak approximation, 6004799503160661/18014398509481984 because 3 doesn't go neatly into any power of 2.

Edited: An earlier version of this comment provided the 32-bit fraction 11184811/33554432 instead of the 64-bit one.
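You can recover the exact stored fraction mechanically; a hedged C sketch using only standard frexp/ldexp:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Decompose the double nearest 1/3 into its exact fraction. */
        double x = 1.0 / 3.0;
        int e;
        double m = frexp(x, &e);                /* x = m * 2^e, 0.5 <= m < 1 */
        uint64_t num = (uint64_t)ldexp(m, 53);  /* 53-bit integer numerator */
        printf("%llu / 2^%d\n", (unsigned long long)num, 53 - e);
        /* prints 6004799503160661 / 2^54, i.e. /18014398509481984 */
        return 0;
    }
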

dmurray · 7h ago
This should be obvious. There are the same number of 32-bit integers as 32-bit floats [0], so for every float that is not an int, there exists an int that is not a float. Clearly most floats cannot be represented as integers, so the converse must be true as well.

But I still see people building systems where implicit conversion of float to int is not allowed because "it would lose precision", but that allow int to float.

[0] don't reply to me about NaNs, please

bee_rider · 4h ago
It is shocking to think about it, but a lot of programmers don’t think about bits at all, and just think of floats as reals. Or maybe if you prod them enough, they’ll treat the floats as sometimes inaccurate reals.
haiku2077 · 3h ago
I cringe whenever I hear floats described as "decimals" :(
Kranar · 6h ago
Sure but at least in my experience it's rare to convert 32 bit ints to 32 bit float. Usually the conversion is 32 bit int to 64 bit float, which is always safe.
nofriend · 5h ago
Floats have a fixed accuracy everywhere. Ints have a variable accuracy across their range. When you convert from int to float, the number stays accurate to however many digits, say 8 digits for single precision. Ints have a fixed precision everywhere, whereas for floats it varies. A given float might be accurate to one part in 10^30. When you cast to int, you get accuracy to within 1 part in 1. So you lose far more orders of magnitude of precision converting from float to int then you lose in order of magnitude of accuracy converting from int to float.
dmurray · 4h ago
Floats have the same accuracy everywhere when measured as relative error; ints have the same accuracy everywhere when measured as absolute error.

Which of those is better? It depends on the application. All you can do is hope the person who gave you the numbers chose the more appropriate representation, the less painful way to lose fidelity in a representation of the real world. By converting to the other representation you now have lost that fidelity in both ways.

Again, by a similar argument to above, something like this has to be true. You have exactly 32 bits of information either way, and both conversions lose the same amount of information - you end up with a 32-bit representation, but it only represents a domain of 2^27 (or whatever it is) distinct numbers.

titzer · 5h ago
> When you convert from int to float, the number stays accurate to however many digits

The point of the article is that this "however many digits" actually implies rounding many numbers that aren't that big. A single precision (i.e. 32-bit) float cannot exactly represent some 32-bit integers. For example 1_234_567_891f is actually rounded to 1_234_567_936. This is because there are only 23 bits of fraction available (https://en.wikipedia.org/wiki/Single-precision_floating-poin...).
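
Easy to verify with a quick C check (near 2^30, consecutive floats are 2^(30-23) = 128 apart, so the value snaps to the nearest multiple of 128):

    #include <stdio.h>

    int main(void) {
        float f = 1234567891.0f;
        printf("%.1f\n", f);                  /* 1234567936.0 */
        printf("%d\n", (int)f - 1234567891);  /* 45 */
        return 0;
    }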

tomtom1337 · 4h ago
You have a typo: In your last sentence you effectively wrote «from int to float» twice in contradicting ways. «To float from int than (…) from int to float».
nofriend · 4h ago
there was an error made when I went back to edit what I wrote...
Aardwolf · 4h ago
> [0] don't reply to me about NaNs, please

The question is why they added so many NaNs to the spec, instead of just one. Probably for signalling, but who actually uses that?

For IEEE float16, the number of values lost because an entire exponent value is needlessly taken up by NaNs is actually quite blatant.

spc476 · 4h ago
You could store the address of the offending instruction/code sequence that generated the NaN (think software-only implementation).
mamcx · 4h ago
> but that allow int to float.

And no thought given to what happens when people do `i64 as usize` and friends.

(This is one area where Pascal got it right, including the fact that you should write loops like `for I in low(nuts)..high(nuts)`)

p.d: 'nuts' was the autocorrect choice that somehow is topical here so I keep it.

worik · 6h ago
> This should be obvious

Yes

They are different types

They are different things

They are related concepts, that is all

GianFabien · 3h ago
I have been writing software for decades in areas ranging from industrial control to large business systems. I almost never use floats or doubles. In almost all cases 32-bit integers (and sometimes U64int) with some scaling suffice.

Perhaps it is the FORTRAN overhang of engineering education that predisposes folks to using floats when Int32/64 would be fine.

I shudder as to why in JavaScript they made Number a float/double instead of an integer. I constantly struggle to coerce JS to work with integers.
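
A hedged sketch of the scaled-integer style described above (the type name and scale are just illustrative):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Scaled integers: keep amounts in the smallest unit (cents here)
       so addition and comparison stay exact. */
    typedef int64_t cents_t;

    int main(void) {
        cents_t price = 1999;            /* $19.99 */
        cents_t tax   = 160;             /* $1.60  */
        cents_t total = price + tax;
        printf("$%" PRId64 ".%02" PRId64 "\n", total / 100, total % 100);  /* $21.59 */
        return 0;
    }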

derriz · 5h ago
This is a fairly obvious one? Although I've mainly encountered the effect with long to double conversion.

On the other hand, floating point is the gift that never stops giving.

A recent wtf I encountered was partly caused by the silent/automatic conversion/casting from a float to double. Nearly all C-style languages do this even though it's unsafe in its own way. Kinda obvious to me now and when I state it like this (using C-style syntax), it looks trivial but: (double) 0.3f is not equal to 0.3d

The wtfness of it was mostly caused by other factors (involving overloaded method/functions and parsing user input) but I realized that I had never really thought about this case before - probably because floats are less common than doubles in general - and without thinking about it, sort of assumed it should be similar to int to long conversion for example (which is safe).
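
The effect is easy to reproduce; a small C check (the exact printed digits assume standard IEEE 754 float/double):

    #include <stdio.h>

    int main(void) {
        /* 0.3 has no exact binary representation, and the float and
           double roundings differ, so widening the float != 0.3. */
        float  f = 0.3f;
        double d = 0.3;
        printf("%d\n", (double)f == d);   /* 0 */
        printf("%.17g\n", (double)f);     /* 0.30000001192092896 */
        printf("%.17g\n", d);             /* 0.29999999999999999 */
        return 0;
    }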

sjrd · 4h ago
Float to double conversion is safe. The thing that's not safe in your example is writing `0.3f` (or `0.3d`) in the first place. The conversion from decimal to float or double is unsafe (inexact) and gives different results for floats than for doubles. But the conversion of float to double, in itself, is always exact.
derriz · 2h ago
I do not consider it safe.

The float representation of 0.3 (e.g.) does not, when cast to double, represent 0.3; in contrast, the i32 representation of any number, when cast to i64, represents the same number.

secondcoming · 5h ago

    -Wfloat-conversion
will warn about this. Been bitten by it too.
taneq · 4h ago
This sort of thing is why you never compare floating point numbers for equality. Always compare using epsilons appropriately chosen for the circumstances.

Floats are Sneaky and Not to be Trusted. :D
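
One common shape for such a comparison, as a hedged sketch (the helper name and tolerances are illustrative, not a universal recipe):

    #include <math.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Relative tolerance with an absolute floor for values near zero. */
    static bool nearly_equal(double a, double b, double rel, double abs_floor) {
        double diff = fabs(a - b);
        return diff <= abs_floor || diff <= rel * fmax(fabs(a), fabs(b));
    }

    int main(void) {
        printf("%d\n", 0.1 + 0.2 == 0.3);                           /* 0 */
        printf("%d\n", nearly_equal(0.1 + 0.2, 0.3, 1e-12, 1e-15)); /* 1 */
        return 0;
    }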

sjrd · 4h ago
Always... except when you're actually writing floating point operations. If I'm implementing `sqrt`, I'd better make sure the result is exactly the expected one. Epsilon is 0 in all my unit tests. ;)
tikhonj · 5h ago
Something I found really annoying in the Avro spec is that they automatically convert between ints/longs and floats/doubles in their backwards compatibility system. That just seemed like an unforced error to me. (Maybe it's changed in newer versions of the standard?)
tedunangst · 3h ago
I'll take the contrary position and argue that most ints are floats, because ints are not uniformly distributed. 0, 1, 10, etc. are far more common.
deckar01 · 5h ago
https://gist.github.com/deckar01/f77d98550eaf5d9b3a954eb0343...

Here is a visualization I made recently on the density of float32. It seems that float32 is basically just PCM, which was a lossy audio compression exploiting the fact that human hearing has logarithmic sensitivity. I’m not sure why they needed the mantissa though. If you give all 31 bits to the exponent, then normalize it to +/-2^7, you get a continuous version of the same function.

tialaramex · 5h ago
So, PCM isn't the thing you meant here. PCM just means Pulse-code modulation, you're probably thinking of a specific non-linear PCM and maybe that was even the default for some particular hardware or software you used, but that's not what PCM itself means and these days almost everything uses Linear PCM.
userbinator · 4h ago
I think G.711 PCM is what the OP meant.
tialaramex · 3h ago
Wow. G.711 is extremely obsolete. Interpreting PCM as G.711 (which is from the 1970s) is about similar to if somebody said "Windows" but meant Windows 2.x the 1980s DOS-based Microsoft GUI. I guess I don't have to feel like I'm the oldest person reading HN.
userbinator · 6m ago
In the telco industry, "PCM" or more precisely "PCMA" and "PCMU", refers to G.711. It's still the default fallback for VoIP applications.
GrantMoyer · 3h ago
Without a mantissa, way too much precision is allocated to the near zero range and not enough to the "near infinity" range. Consider that without a mantissa, the second largest float is only half of the largest float. With a 23 bit mantissa, there are 2^23 floats from half the largest to the largest.
deckar01 · 2h ago
You could change the scaling factor to target any bounds you want. On average the precision is equal. The mantissa just adds linear segments to a logarithmic curve.
timewizard · 5h ago
> It seems that float32 is basically just PCM

float has higher accuracy around 1.0 than around 2^24. This makes it quite a bit different from PCM which is fully linear. Which is probably why floating point PCM keeps its samples primarily between -1.0 and +1.0.

> which was a lossy audio compression

It's not lossy. Your bit depth simply defines the noise floor, which is the smallest difference in volume you can represent. This may result in loss of information, but even at 16 bits only the most sensitive of ears could even pretend to notice.

> If you give all 31 bits to the exponent, then normalize it to +/-2^7, you get a continuous version of the same function.

You'll extend the range but lose all the precision. This is probably the opposite of what any IEEE 754 user actually wants.

deckar01 · 1h ago
Here is what a 31-bit exponent 0-bit mantissa encoding looks like compared to float32:

https://gist.github.com/deckar01/3f93802329debe116b0c3570bed...

dist-epoch · 4h ago
> Which is probably why floating point PCM keeps its samples primarily between -1.0 and +1.0.

No, it's just it's more natural/intuitive to express algorithms in a normalized range if given the possibility.

Same with floating point RGBA (like in GPUs)

PaulHoule · 2d ago
... but people are in the habit of using doubles. Many languages, like Javascript, only support doubles and int32(s) do embed in doubles.

I have some notes for a fantasy computer which is maybe what would have happened if Chinese people [1] evolved something like the PDP-10 [2]. Initially I wanted a 24-bit word size [3] but decided on 48-bit [4] because you can fit 48 bits into a double for a Javascript implementation.

[1] There are instructions to scan UTF-8 characters and the display system supports double-wide bitmap characters that are split into halves that are indexed with 24-bit ints.

[2] It's a load-store architecture but there are instructions to fetch and write 0<n<48 bits out of a word even overlapping two words, which makes [1] possible; maybe that write part is a little unphysical

[3] I can't get over how a possible 24-bit generation didn't quite materialize in the 1980s, and find the eZ80 evokes a kind of nostalgia for an alternate history

[4] In the backstory, it started with a 24-bit address space like the 360 but got extended to have "wide pointers" qualified by an address space identifier (instead of the paging-oriented architecture the industry really took), as well as "deep pointers" which specify a bitmap; 48 bits is enough for a pointer to be deep and wide and have some tag bits. Address spaces can merge together contiguously or not depending on what you put in the address space table.

jerf · 6h ago
"I can't get over how a possible 24-bit generation didn't quite materialize in the 1980s, and find the eZ80 evokes a kind of nostalgia for an alternate history"

Well... it depends on how you look at it.

While the marketers tried to cleanly delineate generations into 8- and 16- and 32-bit eras, the reality was always messier. What exactly the "bits" were that were being measured was not consistent. The size of a machine word in the CPU was most common, and perhaps in some sense objectively the cleanest, but the number of bits of the memory bus started to sneak in at times (like the "64 bit" Atari Jaguar with the 32-bit CPU because one particular component was 64 bits wide). In reality the progress was always more incremental and there are some 24-bit things, like, the 286 can use 24 bits to access memory, and a lot of "32 bit graphics" is really 24 bits because 8 bits for RGB gets you to 24 bits. The lack of a "24-bit generation" is arguably more about the marketing rhetoric than the lack of things that were indeed based around 24 bits in some way.

Even today our "64-bit CPUs" are a lot messier than meets the eye. As far as I know, they can't actually address 64 bits of RAM, there are some reserved higher bits, and depending on which extensions you have, modern CPUs may be able to chew on up to 512 bits at a time with a single instruction, and I could well believe someone snuck something that can chew on 1024 bits without me noticing.

furyofantares · 5h ago
Well yes. Given N bits:

- Most floats are not ints.

- There are the same number of floats as ints.

- Therefore, most ints are not floats.