So when I add or subtract doubles in Java, it gives me strange results. Here are some:
If I add 0.0 + 5.1, it gives me 5.1. That's correct.
If I add 5.1 + 0.1, it gives me 5.199999999999 (the number of repeating 9s may be off). That's wrong.
If I subtract 4.8 - 0.4, it gives me 4.39999999999995 (again, the repeating 9s may be off). That's wrong.
At first I thought this was a problem only when adding doubles with decimal values, but I was wrong. The following worked fine:
5.1 + 0.2 = 5.3
5.1 - 0.3 = 4.8
Now, the first number added is a double saved as a variable, while the second value comes from the text of a JTextField. For example:
//doubleNum = 5.1 RIGHT HERE
//The textfield has only a "0.1" in it.
doubleNum += Double.parseDouble(textField.getText());
//doubleNum = 5.199999999999999
In Java, double values are IEEE floating point numbers. Unless a value is a power of 2 (or a sum of powers of 2, e.g. 1/8 + 1/4 = 3/8), it cannot be represented exactly, no matter how many bits of precision the format offers. Some floating point operations compound the round-off error already present in these numbers. In the cases you've described, the floating-point error has become significant enough to show up in the output.
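For example, printing the sum and the exact value stored for the 0.1 literal makes the error visible (a minimal sketch; BigDecimal's double constructor exposes the exact binary value):
System.out.println(5.1 + 0.1);
// prints 5.199999999999999
System.out.println(new java.math.BigDecimal(0.1));
// prints 0.1000000000000000055511151231257827021181583404541015625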
It doesn't matter what the source of the number is, whether it's parsed from a JTextField or written as a double literal -- the problem is inherent in floating-point representation.
Workarounds:
If you know you'll only have so many decimal places, use integer arithmetic, then convert to a decimal:
(double) (51 + 1) / 10
(double) (48 - 4) / 10
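As a runnable sketch of the same idea (assuming one decimal place, so values are stored as whole tenths):
long tenths = 51 + 1;                      // 5.1 + 0.1, kept as whole tenths
System.out.println((double) tenths / 10);  // prints 5.2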
Use BigDecimal
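For example (a minimal sketch; note the String constructor -- new BigDecimal(0.1) would already carry the binary error):
java.math.BigDecimal a = new java.math.BigDecimal("5.1");
java.math.BigDecimal b = new java.math.BigDecimal("0.1");
System.out.println(a.add(b));   // prints 5.2 exactly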
If you must use double, you can cut down on floating-point errors with the Kahan Summation Algorithm.
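A minimal sketch (the helper name kahanSum is just for illustration):
static double kahanSum(double[] values) {
    double sum = 0.0;
    double c = 0.0;               // running compensation for lost low-order bits
    for (double v : values) {
        double y = v - c;         // apply the compensation to the next term
        double t = sum + y;       // low-order bits of y may be lost here
        c = (t - sum) - y;        // recover what was lost
        sum = t;
    }
    return sum;
}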
In Java, doubles use IEEE 754 floating point arithmetic (see the Wikipedia article on IEEE 754), which is inherently inaccurate for decimal fractions. Use BigDecimal for perfect decimal precision. To round when printing, accepting merely "pretty good" accuracy, use printf("%.3f", x).
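For example:
double x = 5.1 + 0.1;             // stored as 5.199999999999999...
System.out.printf("%.3f%n", x);   // prints 5.200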