I have a double 138630.78380386264 and I want to convert it to a decimal. However, when I do so (either by casting or by using Convert.ToDecimal()), I lose precision. What's going on? Both decimal and double can hold this number:
double doub = double.Parse("138630.78380386264");
decimal dec = decimal.Parse("138630.78380386264");
string decs = dec.ToString("F17");
string doubse = DoubleConverter.ToExactString(doub);
string doubs = doub.ToString("F17");
decimal decC = (decimal) doub;
string doudeccs = decC.ToString("F17");
decimal decConv = Convert.ToDecimal(doub);
string doudecs = decConv.ToString("F17");
Also: how can I get ToString() on double to print out the same result as the debugger shows, e.g. 138630.78380386264?
138630.78380386264 is not exactly representable in double precision. The closest double-precision number (as found here) is 138630.783803862635977566242218017578125, which agrees with your findings.
You ask why the conversion to decimal does not contain more precision. The documentation for Convert.ToDecimal() has the answer:
The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest. The following example illustrates how the Convert.ToDecimal(Double) method uses rounding to nearest to return a Decimal value with 15 significant digits.
The double value, rounded to nearest at 15 significant figures, is 138630.783803863, exactly as you show above.
It is unfortunate, I think. Near 139,000, a Decimal has far better precision than a Double. But still, because of this issue, we have different Doubles being projected onto the same Decimal. For example:
double doub1 = 138630.7838038626;
double doub2 = 138630.7838038628;
Console.WriteLine(doub1 < doub2); // true, values differ as doubles
Console.WriteLine((decimal)doub1 < (decimal)doub2); // false, values projected onto same decimal
In fact, there are six different representable Double values between doub1 and doub2 above, so they are not the same.
Here is a somewhat silly work-around:
static decimal PreciseConvert(double doub)
{
    // Decimal cannot represent infinities or NaN, so reject them up front.
    if (double.IsNaN(doub) || double.IsInfinity(doub))
        throw new ArgumentException("Value must be a finite number.", nameof(doub));

    // NumberStyles lives in System.Globalization. AllowLeadingSign is needed
    // so that negative doubles parse as well.
    return Decimal.Parse(doub.ToString("R"),
        NumberStyles.AllowExponent | NumberStyles.AllowDecimalPoint | NumberStyles.AllowLeadingSign);
}
The "R" format string ensures that enough extra figures are included to make the mapping injective (in the domain where Decimal has superior precision).
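A quick sketch of this in action; the "R" output also answers the second part of the question, since it reproduces the digits the debugger shows:

```csharp
using System;
using System.Globalization;

class RoundTripDemo
{
    static void Main()
    {
        double doub = double.Parse("138630.78380386264", CultureInfo.InvariantCulture);

        // "R" emits enough digits to identify the double uniquely,
        // matching the debugger display.
        string roundTrip = doub.ToString("R", CultureInfo.InvariantCulture);
        Console.WriteLine(roundTrip);  // 138630.78380386264

        // Parsing that string into a decimal keeps all of those digits,
        // unlike (decimal)doub, which rounds to 15 significant digits.
        decimal dec = decimal.Parse(roundTrip,
            NumberStyles.AllowExponent | NumberStyles.AllowDecimalPoint,
            CultureInfo.InvariantCulture);
        Console.WriteLine(dec);  // 138630.78380386264
    }
}
```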
Note that in some range, a long (Int64) has precision superior to that of a Double. So I checked whether those conversions are made in the same way (first rounding to 15 significant decimal places). They are not! So:
double doub3 = 1.386307838038626e18;
double doub4 = 1.386307838038628e18;
Console.WriteLine(doub3 < doub4); // true, values differ as doubles
Console.WriteLine((long)doub3 < (long)doub4); // true, full precision of double used when converting to long
It seems inconsistent to use a different "rule" when the target is decimal.
Note that because of this, (decimal)(long)doub3 produces a more accurate result than just (decimal)doub3.
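A sketch of the two routes side by side (the exact trailing digits of the second line depend on the binary value of the double, so they are not shown):

```csharp
using System;

class LongDetour
{
    static void Main()
    {
        double doub3 = 1.386307838038626e18;

        // Direct conversion: rounded to 15 significant digits.
        Console.WriteLine((decimal)doub3);

        // Via long: the integer keeps the full value of the double,
        // and long -> decimal is exact, so no digits are lost.
        Console.WriteLine((decimal)(long)doub3);
    }
}
```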