This question already has an answer here:
- When should I use double instead of decimal?
I keep seeing people using doubles in C#. I know I read somewhere that doubles sometimes lose precision. My question is: when should I use a double and when should I use a decimal type? Which type is suitable for money computations (i.e. greater than $100 million)?
- decimal: for when you work with values in the range of 10^(+/-28) and where you have expectations about the behaviour based on base-10 representations - basically money.
- double: for when you need relative accuracy (i.e. losing precision in the trailing digits on large values is not a problem) across wildly different magnitudes; double covers more than 10^(+/-300). Scientific calculations are the best example here.
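To put those two ranges side by side, a quick sketch that just prints the built-in limits (a plain C# console program is assumed):

```csharp
using System;

// Rough comparison of the ranges the two types cover.
Console.WriteLine(decimal.MaxValue);          // 79228162514264337593543950335 (~7.9 x 10^28)
Console.WriteLine(double.MaxValue);           // ~1.7976931348623157E+308
Console.WriteLine((double)decimal.MaxValue);  // fits comfortably in a double, but the
                                              // trailing digits are no longer exact
```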
decimal, decimal, decimal

Accept no substitutes.
The most important factor is that double, being implemented as a binary fraction, cannot accurately represent many decimal fractions (like 0.1) at all, and its overall number of digits is smaller, since it is 64-bit wide vs. 128-bit for decimal. Finally, financial applications often have to follow specific rounding modes (sometimes mandated by law); decimal supports these, double does not.
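A minimal sketch of the 0.1 point (assuming a plain C# console program):

```csharp
using System;

// 0.1 has no exact binary representation, so summing it as a double drifts;
// decimal works in base 10, so the same sum stays exact.
double d = 0.0;
decimal m = 0.0m;

for (int i = 0; i < 10; i++)
{
    d += 0.1;
    m += 0.1m;
}

Console.WriteLine(d == 1.0);   // False - the accumulated binary rounding error shows up
Console.WriteLine(m == 1.0m);  // True  - ten additions of 0.1m give exactly 1.0m
```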
Decimal is for exact values. Double is for approximate values.

For money, always decimal. It's why it was created.
If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might do by hand.
If the exact value of numbers is not important, use double for speed. This includes graphics, physics, or other physical-science computations where there is already a "number of significant digits".
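As a rough idea of the speed difference, a hypothetical micro-benchmark sketch (the loop body and iteration count are made up, and actual timings vary by machine and runtime):

```csharp
using System;
using System.Diagnostics;

// decimal arithmetic is done in software, while double maps to hardware floating point,
// so the same loop is typically much slower with decimal.
const int N = 10_000_000;

var sw = Stopwatch.StartNew();
double dSum = 0;
for (int i = 1; i <= N; i++) dSum += 1.0 / i;
sw.Stop();
Console.WriteLine($"double : {sw.ElapsedMilliseconds} ms (sum {dSum})");

sw.Restart();
decimal mSum = 0;
for (int i = 1; i <= N; i++) mSum += 1.0m / i;
sw.Stop();
Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms (sum {mSum})");
```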
I think that the main difference, besides bit width, is that decimal has a base-10 exponent while double has a base-2 exponent.
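One way to see that difference is to look at how each type stores 0.1 (a small sketch; the layouts shown are just the standard decimal.GetBits result and the IEEE 754 bit pattern):

```csharp
using System;

// decimal stores an integer plus a base-10 scale: 0.1m is the integer 1 scaled by 10^-1.
int[] bits = decimal.GetBits(0.1m);
Console.WriteLine(bits[0]);                 // 1  (low 32 bits of the 96-bit integer)
Console.WriteLine((bits[3] >> 16) & 0xFF);  // 1  (the power-of-ten scale)

// double stores a sign, a base-2 exponent and a binary mantissa, which can only
// approximate 1/10.
long raw = BitConverter.DoubleToInt64Bits(0.1);
Console.WriteLine(raw.ToString("X16"));     // 3FB999999999999A
```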
http://software-product-development.blogspot.com/2008/07/net-double-vs-decimal.html
System.Single / float - 7 digits
System.Double / double - 15-16 digits
System.Decimal / decimal - 28-29 significant digits
The way I've been stung by using the wrong type (a good few years ago) is with large amounts: with only 7 digits, you run out at around 1 million for a float. A 15-digit monetary value is roughly 9 trillion, which is about where a double runs out. With division and comparisons it's more complicated (I'm definitely no expert in floating point and irrational numbers - see Marc's point). Mixing decimals and doubles also causes issues.
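For instance, a small hypothetical sketch (the amount and the rate are made up) of what happens when the two types meet in one expression:

```csharp
using System;

decimal balance = 100_000_000.00m;   // hypothetical amount
double rate = 0.05;                  // hypothetical rate

// var interest = balance * rate;    // compile error CS0019: operator '*' cannot be
//                                   // applied to operands of type 'decimal' and 'double'
// decimal wrong = 0.1;              // compile error CS0664: a double literal needs the
//                                   // 'm' suffix (or a cast) to become a decimal

decimal interest = balance * (decimal)rate;  // an explicit cast is required, and the cast
Console.WriteLine(interest);                 // carries over whatever binary error the double had
```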
The question "When should I use double instead of decimal?" has some similar and more in-depth answers.
Using double instead of decimal for monetary applications is a micro-optimization - that's the simplest way I look at it.

For money: decimal. It costs a little more memory, but doesn't have rounding troubles like double sometimes has.