decimal vs double! - Which one should I use and when?

Posted 2018-12-31 02:20


I keep seeing people using doubles in C#. I know I read somewhere that doubles sometimes lose precision. My question is when should I use a double and when should I use the decimal type? Which type is suitable for money computations? (i.e. greater than $100 million)

7 Answers
初与友歌
#2 · 2018-12-31 02:28

My question is when should I use a double and when should I use the decimal type?

Use decimal when you work with values in the range of 10^(+/-28) and where you have expectations about the behaviour based on base-10 representations - basically money.

Use double when you need relative accuracy (i.e. losing precision in the trailing digits on large values is not a problem) across wildly different magnitudes - double covers more than 10^(+/-300). Scientific calculations are the best example here.
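
A quick way to see the scale difference is to print each type's limits; a minimal sketch:

    using System;

    // decimal tops out around 7.9 * 10^28; double reaches roughly 1.8 * 10^308
    Console.WriteLine(decimal.MaxValue);   // 79228162514264337593543950335
    Console.WriteLine(double.MaxValue);    // 1.7976931348623157E+308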

Which type is suitable for money computations?

decimal, decimal, decimal

Accept no substitutes.

The most important factor is that double, being a binary floating-point type, cannot represent many decimal fractions (like 0.1) exactly at all, and its overall number of significant digits is smaller, since it is 64 bits wide versus 128 bits for decimal. Finally, financial applications often have to follow specific rounding modes (sometimes mandated by law); decimal supports these, double does not.
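
Both points are easy to demonstrate; a small sketch (the "G17" format forces a double to show its full stored value):

    using System;

    // double stores the nearest binary fraction to 0.1, not 0.1 itself
    Console.WriteLine(0.1.ToString("G17"));   // 0.10000000000000001
    Console.WriteLine(0.1m);                  // 0.1 - decimal stores it exactly

    // Math.Round on decimal lets you pick the midpoint rule explicitly
    Console.WriteLine(Math.Round(2.5m));                                  // 2 - banker's rounding is the default
    Console.WriteLine(Math.Round(2.5m, MidpointRounding.AwayFromZero));   // 3 - the rule many regulations expect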

无色无味的生活
#3 · 2018-12-31 02:35

Decimal is for exact values. Double is for approximate values.

USD: $12,345.67 (decimal)
CAD: $13,617.27 (decimal)
Exchange rate: 1.102932 (double)
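
A sketch of how that plays out in code, using the figures above as illustrative sample values (the rate comes in as a double, the money stays decimal, and the result is rounded to cents):

    using System;

    decimal usd = 12345.67m;    // exact amount of money
    double rate = 1.102932;     // approximate market rate

    // convert the rate once, multiply in decimal, then round to cents
    decimal cad = Math.Round(usd * (decimal)rate, 2);
    Console.WriteLine(cad);     // 13616.43 with these sample numbers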
妖精总统
#4 · 2018-12-31 02:45

For money, always decimal. It's why it was created.

If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might check by hand.

If the exact value of numbers is not important, use double for speed. This includes graphics, physics or other physical sciences computations where there is already a "number of significant digits".
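
A sketch of the "must add up" case - accumulating the same ten-cent amount a million times drifts in double but stays exact in decimal:

    using System;

    double dTotal = 0.0;
    decimal mTotal = 0.0m;

    // post one million 10-cent transactions
    for (int i = 0; i < 1_000_000; i++)
    {
        dTotal += 0.10;
        mTotal += 0.10m;
    }

    Console.WriteLine(dTotal == 100_000.0);   // False - binary rounding error has accumulated
    Console.WriteLine(mTotal == 100_000m);    // True - every addition was exact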

只靠听说
#5 · 2018-12-31 02:50

I think the main difference, besides bit width, is that decimal uses a base-10 exponent while double uses base 2.

http://software-product-development.blogspot.com/2008/07/net-double-vs-decimal.html
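
You can see the base-10 representation directly: decimal stores a 96-bit integer plus a power-of-ten scale, which is why it even preserves trailing zeros. A small sketch using decimal.GetBits:

    using System;

    // the scale (power of ten) sits in bits 16-23 of the fourth element
    int[] bits = decimal.GetBits(1.00m);
    int scale = (bits[3] >> 16) & 0xFF;

    Console.WriteLine(scale);    // 2 - the value is stored as the integer 100 with scale 10^-2
    Console.WriteLine(1.0m);     // 1.0
    Console.WriteLine(1.00m);    // 1.00 - the scale survives, which no base-2 type can do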

姐姐魅力值爆表
#6 · 2018-12-31 02:51

  • System.Single / float - 7 digits
  • System.Double / double - 15-16 digits
  • System.Decimal / decimal - 28-29 significant digits

The way I've been stung by using the wrong type (a good few years ago) is with large amounts:

  • £520,532.52 - 8 digits
  • £1,323,523.12 - 9 digits

You run out at around 1 million with a float.
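
A sketch of exactly that sting - the pennies silently vanish from a float, while a double still holds them at this magnitude:

    using System;

    float f = 1323523.12f;
    Console.WriteLine((double)f);   // 1323523.125 - the nearest float; the .12 is gone

    double d = 1323523.12;
    Console.WriteLine(d);           // 1323523.12 - well within double's 15-16 digits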

A 15 digit monetary value:

  • £1,234,567,890,123.45

That's about 9 trillion with a double. But with division and comparisons it's more complicated (I'm definitely no expert in floating point and irrational numbers - see Marc's point). Mixing decimals and doubles causes issues:

A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.
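
The classic illustration of that warning, as a sketch - the same comparison answers differently in the two types:

    using System;

    Console.WriteLine(0.1 + 0.2 == 0.3);      // False - each double is a binary approximation
    Console.WriteLine(0.1m + 0.2m == 0.3m);   // True - this decimal arithmetic is exact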

"When should I use double instead of decimal?" has some similar and more in-depth answers.

Using double instead of decimal for monetary applications is a micro-optimization - that's the simplest way I look at it.

临风纵饮
#7 · 2018-12-31 02:51

For money: decimal. It costs a little more memory, but it doesn't have the rounding troubles that double sometimes has.
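
The memory cost is easy to check with sizeof:

    using System;

    Console.WriteLine(sizeof(double));    // 8 bytes
    Console.WriteLine(sizeof(decimal));   // 16 bytes - twice the storage, in exchange for exact base-10 arithmetic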
