How to decide what to use - double or decimal?

Posted 2020-05-25 07:14

Possible Duplicate:
decimal vs double! - Which one should I use and when?

I'm using the double type for prices in my trading software. I've noticed that sometimes there are odd errors. They occur when a price contains 4 digits after the decimal point, like 2.1234.

When I sent "2.1234" from my program, the order appeared on the market at the price of "2.1235".

I don't use decimal because I don't need "extreme" precision. I don't need to distinguish, for example, "2.00000000003" from "2.00000000002". I need at most 6 digits after the decimal point.

The question is: where is the line? When should I use decimal?

Should I use decimal for all financial operations? Even if I need just one digit after the decimal point? (1.1, 1.2, etc.)

I know decimal is pretty slow, so I would prefer to use double unless decimal is absolutely required.

Tags: c# .net
11 answers
Anthone
#2 · 2020-05-25 07:22

As soon as you start doing calculations on doubles you may get unexpected rounding problems, because a double uses a binary representation of the number while a decimal uses a base-10 representation that preserves the decimal digits. That is probably what you are experiencing. If you only serialize and deserialize doubles to text or a database, without doing any rounding, you will not actually lose any precision.
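As a minimal sketch of that round-trip claim, the "R" (round-trip) format specifier produces a string from which the exact same double can be parsed back:

```csharp
using System;

class RoundTripDemo
{
    static void Main()
    {
        double price = 2.1234;               // stored as the nearest binary fraction
        string text = price.ToString("R");   // "R": round-trip (lossless) text form
        double back = double.Parse(text);

        // Serializing to text and parsing back yields the identical double.
        Console.WriteLine(back == price);    // True
    }
}
```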

However, decimals are much better suited for representing monetary values, where you care about the decimal digits (and not the binary digits that a double uses internally). But if you need to do complex calculations (e.g. the integrals used in actuarial computations) you will have to convert the decimal to double before doing the calculation, negating the advantage of using decimals.

A decimal also "remembers" how many digits it has: even though the decimal 1.230 is equal to 1.23, the first is still aware of the trailing zero and can display it when formatted as text.
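A quick sketch of that behaviour (the trailing zero survives in the decimal's scale):

```csharp
using System;

class TrailingZeroDemo
{
    static void Main()
    {
        decimal a = 1.230m;
        decimal b = 1.23m;

        Console.WriteLine(a == b);        // True: the values compare equal
        Console.WriteLine(a.ToString());  // "1.230": the trailing zero is kept
        Console.WriteLine(b.ToString());  // "1.23"
    }
}
```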

我命由我不由天
#3 · 2020-05-25 07:28

There is an explanation of this on MSDN.

冷血范
#4 · 2020-05-25 07:29

When accuracy is needed and important, use decimal.

When accuracy is not that important, then you can use double.

In your case, you should be using decimal, since it's a financial matter.
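The classic repeated-addition example illustrates why: summing a cent-like amount ten times is exact in decimal but drifts in double, because 0.1 has no exact binary representation.

```csharp
using System;

class MoneySumDemo
{
    static void Main()
    {
        decimal decSum = 0m;
        double dblSum = 0.0;

        for (int i = 0; i < 10; i++)
        {
            decSum += 0.1m;   // exact base-10 arithmetic
            dblSum += 0.1;    // each step adds a tiny binary rounding error
        }

        Console.WriteLine(decSum == 1.0m);  // True
        Console.WriteLine(dblSum == 1.0);   // False: accumulated binary error
    }
}
```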

Bombasti
#5 · 2020-05-25 07:32

If it's financial software, you should probably use decimal. This wiki article summarises the issue quite nicely.

等我变得足够好
#6 · 2020-05-25 07:36

Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.

Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.

Of course, if having an exact base-10 representation is not important to you, other factors come into play, which may or may not matter depending on the specific situation:

  • double has a larger range (it can handle very large and very small magnitudes);
  • decimal has more precision (has more significant digits);
  • you may need to use double to interact with some older APIs that are not aware of decimal;
  • double is faster than decimal;
  • decimal has a larger memory footprint;
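These trade-offs can be checked directly. A small sketch (the exact printed representations depend on the runtime, so only approximate magnitudes are noted):

```csharp
using System;

class RangeAndSizeDemo
{
    static void Main()
    {
        // Range: double reaches roughly 1.8E+308, decimal only about 7.9E+28.
        Console.WriteLine(double.MaxValue);
        Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335

        // Precision: double carries ~15-17 significant digits, decimal 28-29.

        // Memory footprint: decimal is twice the size of double.
        Console.WriteLine(sizeof(double));    // 8
        Console.WriteLine(sizeof(decimal));   // 16
    }
}
```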
chillily
#7 · 2020-05-25 07:36

A simple illustration is this example:

decimal d = 0.3M + 0.3M + 0.3M;
bool ret = d == 0.9M;          // true

double db = 0.3 + 0.3 + 0.3;
bool dret = db == 0.9;         // false

The test with the double fails because 0.3 in its binary (base-2) representation is periodic, so you lose precision. The decimal type uses a base-10 representation (an integer scaled by a power of ten), so you do not unexpectedly lose significant digits. Decimals are unfortunately dramatically slower than doubles. Usually we use decimal for financial calculations, where every digit has to be exact, and double/float for engineering.

查看更多