How big is the precision loss converting long to double?

Posted 2020-03-12 03:11

Question:

I have read in different posts on Stack Overflow and in the C# documentation that converting a long (or any other data type representing a number) to double loses precision. This is quite obvious due to the representation of floating-point numbers.

My question is: how big is the loss of precision if I convert a larger number to double? Do I have to expect differences larger than +/- X?

The reason I would like to know this is that I have to deal with a continuous counter, which is a long. This value is read by my application as a string, needs to be converted, has to be divided by e.g. 10 or some other small number, and is then processed further.

Would decimal be more appropriate for this task?

Answer 1:

converting long (or any other data type representing a number) to double loses precision. This is quite obvious due to the representation of floating point numbers.

This is less obvious than it seems, because precision loss depends on the value of the long. A double has a 52-bit mantissa plus one implicit bit, i.e. 53 bits of precision, so for values between -2^53 and 2^53 there is no precision loss at all: every integer in that range is exactly representable.
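
A minimal check of that boundary (a sketch assuming .NET with C# top-level statements):

```csharp
using System;

long exact = 1L << 53;       // 9,007,199,254,740,992: every integer up to here fits in a double
long inexact = exact + 1;    // needs 54 bits of precision

Console.WriteLine((long)(double)exact == exact);     // True: round trip is exact
Console.WriteLine((long)(double)inexact == inexact); // False: rounds back down to 2^53
```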

How big is the loss of precision if I convert a larger number to double? Do I have to expect differences larger than +/- X?

For numbers with magnitude above 2^53 you will experience some precision loss, depending on how far above the 53-bit limit you go. If the absolute value of your long needs, say, 58 bits, the conversion keeps only the top 53 bits and rounds the value to the nearest multiple of 2^(58-53) = 32, so the error is at most +/-16.
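
A sketch of that rounding behavior, using a hypothetical 58-bit counter value:

```csharp
using System;

long value = (1L << 57) + 7;   // a 58-bit number, 7 above a multiple of 32
long roundTripped = (long)(double)value;

Console.WriteLine(value - roundTripped);  // 7: the double snapped to the nearest multiple of 32
Console.WriteLine(roundTripped % 32);     // 0: adjacent doubles are 32 apart at this magnitude
```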

Would decimal be more appropriate for this task?

decimal has a different representation than double: it uses base 10 instead of base 2. Since you are planning to divide your number by "small numbers", the two representations give you different errors on division. Specifically, double is better at handling division by powers of two (2, 4, 8, 16, etc.), because such a division only decrements the exponent and leaves the mantissa untouched. Similarly, large decimals suffer no loss of significant digits when divided by ten, a hundred, and so on, because that only adjusts the decimal scale factor.
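
A short illustration of the difference (the values here are arbitrary examples):

```csharp
using System;

double d = 123456789.0;
Console.WriteLine(d / 8);      // 15432098.625, exact: only the exponent changed

decimal m = 123456789m;
Console.WriteLine(m / 10m);    // 12345678.9, exact: only the decimal scale changed

// Division by ten is generally inexact in binary, because 0.1 has no finite base-2 form:
Console.WriteLine(0.1 + 0.2 == 0.3);  // False
```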



Answer 2:

long

long is a 64-bit integer type and can hold values from –9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 (max. 19 digits).

double

double is a 64-bit floating-point type with a precision of 15-16 significant decimal digits. Data can certainly be lost once your numbers grow beyond 2^53 = 9,007,199,254,740,992 (about 9 × 10^15), because above that point not every integer is representable.

decimal

decimal is a 128-bit decimal type and can hold up to 28-29 significant digits, so it's always safe to convert a long (at most 19 digits) to decimal: every long value is exactly representable.
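
A small sketch showing both behaviors at once, converting long.MaxValue each way:

```csharp
using System;

long big = long.MaxValue;   // 9,223,372,036,854,775,807 (19 digits)

decimal m = big;            // exact: well within decimal's 28-29 digits
double d = big;             // rounded: only ~15-16 digits survive

Console.WriteLine(m);       // 9223372036854775807
Console.WriteLine(d);       // ~9.223372036854776E+18 (rounded up to 2^63)
Console.WriteLine((long)m == big);  // True: the decimal round-trips exactly
```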

Recommendation

I would advise that you find out the exact expectations about the numbers you will be working with. Then you can make an informed decision in choosing the appropriate data type. Since you are reading your numbers from a string, isn't it possible that they will be even greater than 28 digits? In that case, none of the types listed will work for you, and you'll have to use an arbitrary-precision integer type such as System.Numerics.BigInteger.
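
A minimal sketch with System.Numerics.BigInteger, using a hypothetical 30-digit counter value:

```csharp
using System;
using System.Numerics;

string raw = "123456789012345678901234567890";  // hypothetical 30-digit counter

BigInteger counter = BigInteger.Parse(raw);
BigInteger divided = counter / 10;   // exact here; note that BigInteger division truncates

Console.WriteLine(divided);          // 12345678901234567890123456789
```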