Imagine the following situation. Amounts are added to a total in a loop:
float total = 0;
float[] amounts = new float[]{...};
foreach (float amount in amounts)
{
    total += amount;
}
total, as well as all the amounts, is written to a database. When I calculate SUM(amount) in SQL, it yields a value that differs from total.
Moreover, when I do the same calculation in C#, but this time adding the amounts to a value of type double,
double total = 0;
//the rest of the code is the same as above
then total holds the correct value.
Could this be caused by the difference in precision between a float and a double?
Note that it depends on the values; most runs of this calculation produce the correct total.
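The effect is easy to reproduce. A minimal sketch with made-up amounts (the real values come from the database) shows both totals side by side:

float floatTotal = 0f;
double doubleTotal = 0d;
for (int i = 0; i < 1_000_000; i++)
{
    float amount = 0.1f;        // 0.1 has no exact binary representation
    floatTotal += amount;       // rounds to 4-byte precision on every addition
    doubleTotal += amount;      // same inputs, rounded to 8-byte precision
}
Console.WriteLine(floatTotal);  // ≈ 100958.34 on typical IEEE-754 hardware
Console.WriteLine(doubleTotal); // ≈ 100000.0015, essentially 1,000,000 * (double)0.1f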
Yes. SQL Server's float size varies, but you most likely have 8-byte floats in the database (plain float defaults to float(53)). When you use doubles (8 bytes) in the C# code the results match, and when you use floats (4 bytes) they differ. A total accumulated in double precision will generally not equal the same total accumulated in float precision, even when it stays within float's range, because every addition rounds at a different precision.
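To see that the conversion itself is not the culprit: widening a single float to double is exact, so only the repeated rounding during accumulation diverges. A quick sketch (the 0.1f is just an illustrative value):

float amount = 0.1f;
double widened = amount;               // float-to-double widening is exact
Console.WriteLine(widened == amount);  // True: the widened value equals the original
// It is the repeated += at two different precisions that makes the totals drift.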
Yes. It can also simply be that not all values can be represented exactly in IEEE floating point; in many scenarios where you expect "exact" sums (money, etc.), it is usually better to use decimal. However, that shouldn't be interpreted to mean that decimal is "exact" - simply that the way it rounds is more conducive to how we humans think of rounding.