I can name three advantages to using `double` (or `float`) instead of `decimal`:
- Uses less memory.
- Faster because floating point math operations are natively supported by processors.
- Can represent a larger range of numbers.
But these advantages seem to apply only to calculation-intensive operations, such as those found in modeling software. Of course, doubles should not be used when precision is required, such as in financial calculations. So are there any practical reasons to ever choose `double` (or `float`) instead of `decimal` in "normal" applications?
Edited to add: Thanks for all the great responses, I learned from them.
One further question: a few people made the point that doubles can represent real numbers more precisely. When declared, I would think they usually represent them more accurately as well. But is it true that the accuracy may decrease (sometimes significantly) when floating-point operations are performed?
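For concreteness, here's a minimal C# sketch of the kind of drift I'm asking about; it assumes nothing beyond the built-in `double` and `decimal` types:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        double d = 0.0;
        decimal m = 0.0m;

        // 0.1 has no exact binary representation, so repeated double addition drifts;
        // 0.1m is stored exactly, so the decimal total stays exact.
        for (int i = 0; i < 1000; i++)
        {
            d += 0.1;
            m += 0.1m;
        }

        Console.WriteLine(d == 100.0);   // False: accumulated rounding error
        Console.WriteLine(m == 100.0m);  // True
        Console.WriteLine(d);            // something like 99.9999999999986
    }
}
```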
I think you've summarised the advantages quite well. You are, however, missing one point. The `decimal` type is only more accurate at representing base-10 numbers (e.g. those used in currency/financial calculations). In general, the `double` type is going to offer at least as great precision (someone correct me if I'm wrong) and definitely greater speed for arbitrary real numbers. The simple conclusion is: when considering which to use, always use `double` unless you need the base-10 accuracy that `decimal` offers.

Edit:
Regarding your additional question about the decrease in accuracy of floating-point numbers after operations, this is a slightly more subtle issue. Indeed, precision (I use the term interchangeably with accuracy here) will steadily decrease after each operation is performed. This is due to two reasons: most real numbers cannot be represented exactly in binary floating point, so each intermediate result may need to be rounded to the nearest representable value; and those small rounding errors accumulate, and can be magnified, across subsequent operations.
In all cases, if you want to compare two floating-point numbers that should in theory be equivalent (but were arrived at using different calculations), you need to allow a certain degree of tolerance (how much varies, but is typically very small).
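As a sketch of what such a tolerance check might look like (the epsilon value here is an arbitrary illustration, not a universal constant):

```csharp
using System;

static class FloatCompare
{
    // Compares two doubles that should be equivalent in theory, allowing a
    // small absolute tolerance to absorb accumulated rounding error.
    public static bool NearlyEqual(double a, double b, double epsilon = 1e-9)
        => Math.Abs(a - b) <= epsilon;
}

// Usage: (0.1 + 0.2) == 0.3 evaluates to false,
// but FloatCompare.NearlyEqual(0.1 + 0.2, 0.3) returns true.
```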
For a more detailed overview of the particular cases where accuracy errors can be introduced, see the Accuracy section of the Wikipedia article. Finally, if you want a seriously in-depth (and mathematical) discussion of floating-point numbers and operations at the machine level, try reading the oft-quoted article What Every Computer Scientist Should Know About Floating-Point Arithmetic.
In some accounting contexts, consider the possibility of using integral types instead of (or in conjunction with) double or decimal. For example, let's say the rules you operate under require every calculation result to be carried forward with at least 6 decimal places, with the final result rounded to the nearest penny.
A calculation of 1/6th of $100 yields $16.66666666666666..., so the value carried forward in a worksheet will be $16.666667. Both double and decimal should yield that result accurately to 6 decimal places. However, we can avoid any cumulative error by carrying the result forward as the integer 16666667. Each subsequent calculation can be made with the same precision and carried forward similarly. Continuing the example, I calculate Texas sales tax on that amount (16666667 * 0.0825 = 1375000). Adding the two (it's a short worksheet): 16666667 + 1375000 = 18041667. Moving the decimal point back in gives us 18.041667, or $18.04.
While this short example wouldn't yield a cumulative error using double or decimal, it's fairly easy to show cases where simply calculating with double or decimal and carrying the result forward would accumulate significant error. If the rules you operate under require a limited number of decimal places, storing each value as an integer by multiplying by 10^(required number of decimal places), and then dividing by the same factor to get the actual value back, will avoid any cumulative error.
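Here's a rough C# sketch of that scaled-integer approach. The 6-decimal-place scale and the rounding choices are just the assumptions from the example above, not a general rule:

```csharp
using System;

class ScaledIntegerWorksheet
{
    const long Scale = 1_000_000; // carry 6 decimal places forward

    static void Main()
    {
        // 1/6th of $100.00, carried forward as a scaled integer: 16.666667
        long share = (long)Math.Round(100m * Scale / 6m);        // 16_666_667

        // 8.25% Texas sales tax on that amount, rounded to the same scale.
        long tax = (long)Math.Round(share * 0.0825m);            // 1_375_000

        long total = share + tax;                                 // 18_041_667

        // Move the decimal point back in and round to the nearest penny.
        decimal dollars = Math.Round((decimal)total / Scale, 2);  // 18.04
        Console.WriteLine(dollars);
    }
}
```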
In situations where fractions of pennies do not occur (for example, a vending machine), there is no reason to use non-integral types at all. Simply think of it as counting pennies, not dollars. I have seen code where every calculation involved only whole pennies, yet use of double led to errors! Integer-only math removed the issue. So my unconventional answer is: when possible, forgo both double and decimal.
Use floating point if you value performance over correctness.
Note: this post is based on information about the decimal type's capabilities from http://csharpindepth.com/Articles/General/Decimal.aspx and my own interpretation of what that means. I will assume Double is normal IEEE double precision.

Note 2: smallest and largest in this post refer to the magnitude of the number.
Pros of decimal:
- It can represent base-10 fractions such as 0.1 exactly (up to 28-29 significant digits), so results match what a human doing base-10 arithmetic would get.
- That makes it the natural fit for money and other calculations that must mirror human bookkeeping.

Cons of decimal:
- It is 128 bits wide versus double's 64, so it uses more memory.
- It has no hardware support, so arithmetic is considerably slower.
- Its range of magnitudes (both the smallest and largest representable values) is narrower than double's, and it has no infinity or NaN values.

My opinion is that you should default to using decimal for money work and other cases where matching human calculation exactly is important, and that you should use double as your default choice the rest of the time.
Use a double or a float when you don't need precision. For example, in a platformer game I wrote, I used a float to store the player velocities. Obviously I don't need super precision here, because I eventually round to an int for drawing on the screen.
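A made-up fragment (not from the actual game) to illustrate the idea:

```csharp
using System;

class PlayerMovement
{
    // Hypothetical values; sub-pixel precision in the velocity doesn't matter
    // because the position is rounded to whole pixels when drawn.
    float velocityX = 120.5f;   // pixels per second
    float positionX = 10.0f;

    public int Update(float deltaTime)
    {
        positionX += velocityX * deltaTime;
        return (int)Math.Round(positionX);   // whole-pixel x coordinate for rendering
    }
}
```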
Decimal is wider (128 bits versus double's 64), while double is natively supported by the CPU. Decimal is base-10 and its arithmetic is implemented in software rather than in hardware, which is why computing with it is slower.
Keep in mind that the .NET CLR only supports Math.Pow(double, double); there is no decimal overload (this is still the case as of .NET Framework 4).
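So if you need exponentiation on a decimal value, you end up casting through double and losing decimal's exactness along the way. A minimal sketch (the compound-interest numbers are just an illustration):

```csharp
using System;

class PowExample
{
    static void Main()
    {
        decimal principal = 1000.00m;
        decimal rate = 0.05m;

        // Math.Pow only accepts doubles, so the decimal values have to be
        // cast out and the result cast back; exactness is lost in the round trip.
        decimal compounded = principal * (decimal)Math.Pow((double)(1 + rate), 10);

        Console.WriteLine(Math.Round(compounded, 2)); // roughly 1628.89
    }
}
```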