I am quite confused about this peculiar 'error' I am getting when parsing a String to a Double.
I've already set up the NumberFormat properties and symbols.
When passing a String with 15 digits and 2 decimals (e.g. str = "333333333333333,33") and parsing it with Number num = NumberFormat.parse(str), the result omits a digit.
The actual value of num is 3.333333333333333E14.
It seems to be working with Strings with all 1's, 2's and 4's though...
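To make it concrete, this is roughly what I am doing (the Locale.ITALY instance below is only an example of a format whose decimal separator is ','; my actual setup uses custom symbols):

    import java.text.NumberFormat;
    import java.text.ParseException;
    import java.util.Locale;

    public class ParseDemo {
        public static void main(String[] args) throws ParseException {
            // A locale whose decimal separator is ',' so the input string parses
            NumberFormat nf = NumberFormat.getInstance(Locale.ITALY);

            String str = "333333333333333,33";
            Number num = nf.parse(str);

            // As described above, this shows 3.333333333333333E14 rather than the full value
            System.out.println(num);
        }
    }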
Can anyone enlighten me?
Cheers, Enrico
The DecimalFormat.parse method will in this case return a Double, which has limited precision. You can't expect it to always be able to return a Number that represents the input exactly.
You can use DecimalFormat.setParseBigDecimal to make the number format return a BigDecimal from the parse method. That Number is capable of representing your values with arbitrary precision. (Thanks @Peter Lawrey for pointing that out!)
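A minimal sketch of what that looks like (the pattern and symbols below are assumptions for illustration, not your actual configuration):

    import java.math.BigDecimal;
    import java.text.DecimalFormat;
    import java.text.DecimalFormatSymbols;
    import java.text.ParseException;
    import java.util.Locale;

    public class BigDecimalParseDemo {
        public static void main(String[] args) throws ParseException {
            // Symbols with ',' as the decimal separator, as in the question
            DecimalFormatSymbols symbols = new DecimalFormatSymbols(Locale.ITALY);
            DecimalFormat df = new DecimalFormat("#,##0.##", symbols);

            // Ask the format to return BigDecimal from parse() instead of Double
            df.setParseBigDecimal(true);

            Number num = df.parse("333333333333333,33");
            System.out.println(num);                        // 333333333333333.33
            System.out.println(num instanceof BigDecimal);  // true
        }
    }

Note that parse() still returns a Number; you only keep the extra precision as long as you work with the BigDecimal and don't call doubleValue() on it.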
The short answer: it is due to rounding error, but if you want more precision, use setParseBigDecimal and parse will return a BigDecimal.
Why does this happen? You are at the limit of the precision of double. Seventeen ones is fine, as it can just about be represented. Seventeen twos is simply double that, and since a double stores a power-of-two scale, any power of two times seventeen ones (so seventeen fours and seventeen eights) is fine as well.
However, seventeen threes need one more bit than a double has available, and that last bit is truncated. Similarly, seventeen fives, sixes and nines also suffer rounding errors.
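For example, a small sketch along these lines (the values are illustrative, chosen to match the cases above) shows the difference:

    import java.math.BigDecimal;

    public class PrecisionDemo {
        public static void main(String[] args) {
            double ones   = 111111111111111.11; // 17 ones, which the question reports as working
            double threes = 333333333333333.33; // 17 threes, as in the question

            // toString prints the shortest decimal that maps back to the same double
            System.out.println("ones   printed: " + ones);
            System.out.println("threes printed: " + threes);

            // new BigDecimal(double) exposes the exact binary value each double holds
            System.out.println("ones   exact:   " + new BigDecimal(ones));
            System.out.println("threes exact:   " + new BigDecimal(threes));
        }
    }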
Printing each double and then wrapping it in a BigDecimal shows the difference: the double is rounded slightly before printing, while the BigDecimal shows you the exact value the double actually represents.