Here is a sample piece of code with its outputs on .NET Core 2.2 and 3.1. It shows different computational results for a basic floating point expression a^b.
In this example we calculate 1.9 to the power of 3. Earlier .NET Framework versions and .NET Core up to 2.2 yielded the expected result, but .NET Core 3.0 and 3.1 yield a different one.
Is this an intended change, and how can we migrate financial calculation code to the new version with a guarantee that numerical calculations will still yield the same results? (It would also be nice if .NET had a decimal math library.)
using System;

public static class Program
{
    public static void Main(string[] args)
    {
        Console.WriteLine("--- Decimal ---------");
        ComputeWithDecimalType();
        Console.WriteLine("--- Double ----------");
        ComputeWithDoubleType();
        Console.ReadLine();
    }

    private static void ComputeWithDecimalType()
    {
        decimal a = 1.9M;
        decimal b = 3M;
        decimal c = a * a * a;
        decimal d = (decimal) Math.Pow((double) a, (double) b);
        Console.WriteLine($"a * a * a = {c}");
        Console.WriteLine($"Math.Pow((double) a, (double) b) = {d}");
    }

    private static void ComputeWithDoubleType()
    {
        double a = 1.9;
        double b = 3;
        double c = a * a * a;
        double d = Math.Pow(a, b);
        Console.WriteLine($"a * a * a = {c}");
        Console.WriteLine($"Math.Pow(a, b) = {d}");
    }
}
.NET Core 2.2
--- Decimal ---------
a * a * a = 6.859
Math.Pow((double) a, (double) b) = 6.859
--- Double ----------
a * a * a = 6.859
Math.Pow(a, b) = 6.859
.NET Core 3.1
--- Decimal ---------
a * a * a = 6.859
Math.Pow((double) a, (double) b) = 6.859
--- Double ----------
a * a * a = 6.858999999999999
Math.Pow(a, b) = 6.858999999999999
.NET Core 3.0 introduced many floating point parsing and formatting improvements in IEEE floating point compliance. One of them is IEEE 754-2008 formatting compliance.
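The effect of that change can be seen in isolation with a minimal round-trip check (a sketch; the per-runtime behavior is noted in the comments):

using System;

public static class RoundTripCheck
{
    public static void Main()
    {
        double d = 1.9 * 1.9 * 1.9;

        // On .NET Core 3.0+, d formats as "6.858999999999999", the shortest
        // string that parses back to the exact same bits, so this prints True.
        // Older runtimes formatted d as "6.859", which parses to a value one
        // bit away, so the same check printed False.
        Console.WriteLine(double.Parse(d.ToString()) == d);
    }
}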
Before .NET Core 3.0, ToString() internally limited precision to "just" 15 digits, producing a string that couldn't be parsed back to the original value. The question's values differ by a single bit.

In both .NET Framework 4.7 and .NET Core 3, the actual bytes remain the same. In both cases, calling

    BitConverter.GetBytes(1.9 * 1.9 * 1.9)

produces

    85, 14, 45, 178, 157, 111, 27, 64

On the other hand, BitConverter.GetBytes(6.859) produces:

    86, 14, 45, 178, 157, 111, 27, 64

Even in .NET Core 3, parsing "6.859" produces the second byte sequence:

    BitConverter.GetBytes(double.Parse("6.859"))
    // 86, 14, 45, 178, 157, 111, 27, 64

This is a single-bit difference. The old behavior produced a string that couldn't be parsed back to the original value.
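Here is a self-contained sketch of that byte-level comparison; the byte order shown is for a little-endian machine such as x64:

using System;
using System.Globalization;

public static class ByteComparison
{
    public static void Main()
    {
        double computed = 1.9 * 1.9 * 1.9;                                   // result of the arithmetic
        double parsed = double.Parse("6.859", CultureInfo.InvariantCulture); // nearest double to 6.859

        // Dump the raw IEEE 754 bytes of both values.
        Console.WriteLine(string.Join(", ", BitConverter.GetBytes(computed))); // 85, 14, 45, 178, 157, 111, 27, 64
        Console.WriteLine(string.Join(", ", BitConverter.GetBytes(parsed)));   // 86, 14, 45, 178, 157, 111, 27, 64

        // The two 64-bit patterns differ by exactly one unit in the last place.
        long diff = BitConverter.DoubleToInt64Bits(parsed) - BitConverter.DoubleToInt64Bits(computed);
        Console.WriteLine(diff); // 1
    }
}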
The difference is explained by this change in .NET Core 3.0: ToString(), ToString("G"), and ToString("R") now return the shortest string that round-trips back to the original value, rather than a representation truncated to 15 significant digits.
That's why we always need to specify a precision when dealing with floating point numbers. There were improvements in this area too: ToString("G15") produces 6.859, while ToString("G16") produces 6.858999999999999, which has 16 significant digits. That's a reminder that we always need to specify a precision when working with floating point numbers, whether comparing or formatting.
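A short sketch of that advice follows; the tolerance below is an arbitrary value chosen for illustration, not a recommendation for financial code:

using System;
using System.Globalization;

public static class PrecisionDemo
{
    public static void Main()
    {
        double d = 1.9 * 1.9 * 1.9;

        // Formatting: state the precision explicitly instead of relying on the default.
        Console.WriteLine(d.ToString("G15", CultureInfo.InvariantCulture)); // 6.859
        Console.WriteLine(d.ToString("G16", CultureInfo.InvariantCulture)); // 6.858999999999999
        Console.WriteLine(d.ToString("F3", CultureInfo.InvariantCulture));  // 6.859

        // Comparing: use an explicit tolerance instead of ==.
        const double tolerance = 1e-12; // arbitrary for this example; pick one for your domain
        Console.WriteLine(Math.Abs(d - 6.859) < tolerance); // True
    }
}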