My question, in short, is: why does rounding error in floats only show up after calculations, and not when storing literals?
What I mean is this: I know about the issues that arise due to rounding error in floats when converting from decimal to binary and back.
E.g., in Java:
double a = 10.567;
double b = 2.16;
double c = a * b;
c then stores the value 22.824720000000003, instead of 22.82472.
This is because the result 22.82472 cannot be stored exactly in the finite binary digits of the double type. However, neither can 10.567 nor 2.16 (i.e. a and b).
But if I print the values of a and b, they are printed out exactly, without any rounding error. Why does rounding error not show up here?
Does this mean that the representation of float literals is somehow different from the representation of float calculation results?
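To show concretely what I mean, printing the values from the snippet above gives (output in the comments):
System.out.println(a); // prints 10.567
System.out.println(b); // prints 2.16
System.out.println(c); // prints 22.824720000000003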
There is rounding error in the conversion of literals; it just happens to be hidden from you. 10.567 can't be represented exactly in binary, so instead it rounds to the nearest representable double value, which is 10.56700000000000017053025658242404460906982421875.
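You can see that exact stored value from Java itself; a minimal sketch using java.math.BigDecimal (the BigDecimal(double) constructor preserves the double's exact binary value):
import java.math.BigDecimal;

double a = 10.567;
// BigDecimal(double) keeps the exact binary value of the double, so printing it
// exposes the rounding that happened when the literal was converted.
System.out.println(new BigDecimal(a)); // prints 10.56700000000000017053025658242404460906982421875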
However, rather than printing the exact value (which would be rather annoying), the printing algorithm prints the fewest possible digits such that, if they were converted back to binary, they would give the same double value (which in this case is "10.567").
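You can check that round-trip property directly with the standard Double.toString and Double.parseDouble methods; a small sketch:
double a = 10.567;
String printed = Double.toString(a);       // "10.567", the fewest digits that identify this double
double back = Double.parseDouble(printed); // converting back yields the exact same double
System.out.println(back == a);             // prints true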
It turns out that for doubles you can do this for any decimal with up to 15 significant digits; see http://www.exploringbinary.com/number-of-digits-required-for-round-trip-conversions/.
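One way to see the 15-digit guarantee in action (the specific values here are just illustrative): two decimals that differ only in the 15th significant digit still map to distinct doubles, so nothing is lost at that precision.
double x = Double.parseDouble("10.5670000000001"); // 15 significant digits
double y = Double.parseDouble("10.5670000000002"); // differs only in the last digit
System.out.println(x == y);                        // prints false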