Here's an oddity (to me, at least). This routine prints true:
double x = 11.0;
double y = 10.0;
if (x - y == 1.0) {
    System.out.println("true");   // this branch runs
} else {
    System.out.println("false");
}
But this routine prints false:
double x = 1.1;
double y = 1.0;
if (x - y == 0.1) {
    System.out.println("true");
} else {
    System.out.println("false");  // this branch runs
}
Anyone care to explain what's going on here? I'm guessing it has something to do with integer arithmetic for ints posing as floats. Also, are there other bases (other than 10) that have this property?
1.0 has an exact binary representation. 0.1 does not.
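A quick way to see the difference (a minimal sketch):
System.out.println(11.0 - 10.0 == 1.0);  // true: 11.0, 10.0, and 1.0 are all exact sums of powers of two
System.out.println(1.1 - 1.0 == 0.1);    // false: 1.1 and 0.1 are each rounded, and their rounding errors differ
System.out.println(1.1 - 1.0);           // 0.10000000000000009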
Perhaps you are asking why 0.1 is not stored as a mantissa of 1 and a decimal exponent of -1? But that's not how it works: the stored value is not a decimal number plus an exponent, it's a binary number, so "times ten" is not a natural operation.
Sorry, maybe the last part is unclear. It's better to think of the exponent as a shift of bits, and no shift of bits will turn the infinitely repeating binary expansion of decimal 0.1 into a finite one.
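To spell that out, here is the expansion (decimal on the left, binary on the right):
0.1 (decimal) = 0.000110011001100110011... (binary)
The block 0011 repeats forever, and choosing an exponent only moves the binary point, so no finite mantissa can hold it. 1.0, by contrast, is exactly 2^0.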
Edit: I stand corrected by Andrew. Thank you!
Java follows IEEE 754 with a base of 2, so it cannot represent 0.1 exactly (it is approximately 0.1000000000000000055511151231257827021181583404541015625, or 1.1001100110011001100110011001100110011001100110011010 * 2^-4 in IEEE notation). You can confirm this from the binary representation of the double (bit 63 = sign, bits 62-52 = exponent, bits 51-0 = mantissa) like this:
long l = Double.doubleToLongBits(0.1);      // raw IEEE 754 bit pattern of the double 0.1
System.out.println(Long.toBinaryString(l)); // the 64 bits (the leading zero sign bit is dropped)
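If you want the three fields separated out, here is a small sketch along the same lines (variable names are mine):
long bits = Double.doubleToLongBits(0.1);
long sign     = bits >>> 63;                 // bit 63
long exponent = (bits >>> 52) & 0x7FFL;      // bits 62-52, biased by 1023
long mantissa = bits & 0xFFFFFFFFFFFFFL;     // bits 51-0, the fraction after the implicit leading 1
System.out.println(sign);                    // 0
System.out.println(exponent - 1023);         // -4, matching the 2^-4 above
System.out.println(Long.toBinaryString(mantissa)); // 1001100110011...1010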
I just got carried away by the results and thought for a moment that floating-point numbers in Java work with a base of 10, in which case it would have been possible to represent 0.1 just fine.
And now, to hopefully clear things up once and for all, here's what goes on:
import java.math.BigDecimal;

// The BigDecimal(double) constructor captures the exact binary value of its argument.
BigDecimal bigDecimal1 = new BigDecimal(0.1d);
BigDecimal bigDecimal2 = new BigDecimal(1.1d - 1.0);
BigDecimal bigDecimal3 = new BigDecimal(1.1d);
BigDecimal bigDecimal4 = new BigDecimal(1.0d);
// doubleValue() converts back to double, which then prints in its short decimal form:
System.out.println(bigDecimal1.doubleValue());
System.out.println(bigDecimal2.doubleValue());
System.out.println(bigDecimal3.doubleValue());
System.out.println(bigDecimal4.doubleValue());
// Printing the BigDecimal itself shows the full exact value:
System.out.println(bigDecimal1);
System.out.println(bigDecimal2);
System.out.println(bigDecimal3);
System.out.println(bigDecimal4);
Outputs:
0.1
0.10000000000000009
1.1
1.0
0.1000000000000000055511151231257827021181583404541015625
0.100000000000000088817841970012523233890533447265625
1.100000000000000088817841970012523233890533447265625
1
So what happens? 1.1 - 1.0 is equivalent to:
1.100000000000000088817841970012523233890533447265625 - 1
(Java can't represent 1.1 precisely), which is 0.100000000000000088817841970012523233890533447265625, and this is different from the way Java represents 0.1 internally (0.1000000000000000055511151231257827021181583404541015625).
If you're wondering why the result of the subtraction is displayed as 0.10000000000000009 while "0.1" is displayed as it is: Double.toString prints the shortest decimal string that uniquely distinguishes a double from its neighbours, so two different doubles always print differently. Have a look over here for the details.
This comes up in currency calculations all the time. Use BigDecimal if you need an exact numerical representation, at the cost of not having hardware-accelerated performance, of course.
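For example, a minimal sketch of a currency-safe subtraction (names are mine; note the String constructor, since new BigDecimal(0.1) would capture the double's error):
BigDecimal price = new BigDecimal("1.10");
BigDecimal paid  = new BigDecimal("1.00");
System.out.println(price.subtract(paid));                                       // 0.10, exact
System.out.println(price.subtract(paid).compareTo(new BigDecimal("0.1")) == 0); // true (compareTo ignores scale)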