Seems like the subtraction is triggering some kind of issue and the resulting value is wrong.
double tempCommission = targetPremium.doubleValue()*rate.doubleValue()/100d;
// 78.75 = 787.5 * 10.0/100d
double netToCompany = targetPremium.doubleValue() - tempCommission;
// 708.75 = 787.5 - 78.75
double dCommission = request.getPremium().doubleValue() - netToCompany;
// 877.8499999999999 = 1586.6 - 708.75
The expected result is 877.85.
What should be done to ensure the correct calculation?
As the previous answers stated, this is a consequence of doing floating point arithmetic.
As a previous poster suggested, when you are doing numeric calculations, use java.math.BigDecimal.

However, there is a gotcha to using BigDecimal. When you are converting from a double value to a BigDecimal, you have a choice of using the new BigDecimal(double) constructor or the BigDecimal.valueOf(double) static factory method. Use the static factory method.

The double constructor converts the entire precision of the double to a BigDecimal, while the static factory effectively converts it to a String and then converts that to a BigDecimal.

This becomes relevant when you are running into those subtle rounding errors. A number might display as 0.585, but internally its value is 0.58499999999999996447286321199499070644378662109375. If you used the BigDecimal constructor, you would get a number that is NOT equal to 0.585, while the static factory method would give you a value equal to 0.585.
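A minimal sketch of the difference (the original answer's snippet isn't reproduced above, so the exact code here is an assumption, though the two conversion routes are the ones just described):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ValueOfVsConstructor {
    public static void main(String[] args) {
        // The double literal 0.585 is really 0.58499999999999996447... internally.
        BigDecimal viaConstructor = new BigDecimal(0.585);      // keeps the full binary expansion
        BigDecimal viaValueOf     = BigDecimal.valueOf(0.585);  // goes through the String "0.585"

        System.out.println(viaConstructor); // 0.58499999999999996447286321199499070644378662109375
        System.out.println(viaValueOf);     // 0.585

        // Rounding to two decimal places then differs at the boundary:
        System.out.println(viaConstructor.setScale(2, RoundingMode.HALF_UP)); // 0.58
        System.out.println(viaValueOf.setScale(2, RoundingMode.HALF_UP));     // 0.59
    }
}
```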
This is a fun issue.
The idea behind Timon's reply is that you specify an epsilon which represents the smallest precision a legal double can have. If you know your application will never need precision below 0.00000001, then what he suggests is sufficient to get a more precise result very close to the truth. This is useful in applications that know their maximum precision up front (for instance, finance for currency precisions, etc.).
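Timon's snippet isn't reproduced above; a hypothetical sketch of the epsilon idea, assuming two decimal places and an epsilon of 0.00000001, could look like this:

```java
public class EpsilonRound {
    // The smallest precision this application cares about (an assumed value).
    static final double EPSILON = 0.00000001;

    // Half-up rounding to a fixed number of decimals, nudged by EPSILON so a value
    // sitting a hair below a rounding boundary still rounds like its exact counterpart.
    static double round(double value, int decimals) {
        double factor = Math.pow(10, decimals);
        return Math.floor(value * factor + 0.5 + EPSILON) / factor;
    }

    public static void main(String[] args) {
        double dCommission = 1586.6 - 708.75;      // 877.8499999999999 in plain doubles
        System.out.println(round(dCommission, 2)); // 877.85
    }
}
```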
However, the fundamental problem with trying to round it off is that when you divide by a factor to rescale, you introduce another opportunity for precision problems. Any manipulation of doubles can introduce imprecision with varying frequency, especially if you're trying to round at a very significant digit (so your operands are < 0). For instance, running Timon's code to round at units of 500000 (i.e., expecting 1500000) can result in 1499999.9999999998.

In fact, the only way to be completely sure you've eliminated the imprecision is to go through a BigDecimal to scale it off, e.g. as in the sketch below.
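A minimal sketch of that BigDecimal rescaling (the helper name and the example values are illustrative, not the answer's original code):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalRescale {
    // Round 'value' to the nearest multiple of 'unit' without a final double division.
    static double roundToUnit(double value, long unit) {
        BigDecimal u = BigDecimal.valueOf(unit);
        return BigDecimal.valueOf(value)
                .divide(u, 0, RoundingMode.HALF_UP) // how many whole units?
                .multiply(u)                        // scale back up exactly
                .doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(roundToUnit(1400000.01, 500000)); // 1500000.0
    }
}
```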
Using a mix of the epsilon strategy and the BigDecimal strategy will give you fine control over your precision. The idea being the epsilon gets you very close and then the BigDecimal will eliminate any imprecision caused by rescaling afterwards. Though using BigDecimal will reduce the expected performance of your application.
It has been pointed out to me that the final step of going through BigDecimal to rescale isn't always necessary for some use cases, when you can determine that there is no input value for which the final division can reintroduce an error. Currently I don't know how to determine this properly, so if anyone knows how, I'd be delighted to hear about it.
Although you should not use doubles for precise calculations, the following trick helped me when I was rounding the results anyway; see the sketch below for an example and what it prints.
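The original snippet and its output are not shown above; a hedged sketch, assuming the trick is the usual scale-round-unscale to a fixed number of decimal places:

```java
public class RoundTrick {
    public static void main(String[] args) {
        double dCommission = 1586.6 - 708.75;
        System.out.println(dCommission);                        // 877.8499999999999

        // Scale up, round to a long, scale back down: fine if you are rounding anyway.
        double rounded = Math.round(dCommission * 100) / 100.0;
        System.out.println(rounded);                            // 877.85
    }
}
```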
More info: http://en.wikipedia.org/wiki/Double_precision
So far the most elegant and most efficient way to do that in Java:
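The snippet itself is missing above; one common formulation of this shape (whether it matches the author's exact line is an assumption) rounds to two decimal places like so:

```java
double num = 1586.6 - 708.75;                       // 877.8499999999999
double newNum = Math.floor(num * 100 + 0.5) / 100;  // half-up rounding to two decimals
System.out.println(newNum);                         // 877.85
```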
Better yet, use JScience, as BigDecimal is fairly limited (e.g., it has no sqrt function).
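The answer's code isn't shown above; a rough sketch of what the question's subtraction might look like with JScience's Real type (the Real.valueOf and minus calls are my reading of the JScience API, not code from the answer):

```java
import org.jscience.mathematics.number.Real;

public class JScienceSketch {
    public static void main(String[] args) {
        // Real tracks an error bound with each value instead of a fixed binary precision.
        Real premium      = Real.valueOf(1586.6);
        Real netToCompany = Real.valueOf(708.75);
        Real dCommission  = premium.minus(netToCompany); // the exact commission is 877.85
        System.out.println(dCommission);
    }
}
```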