Does double have a greater range than long?

Posted 2019-02-05 20:53

An article on MSDN states that the double data type has a range of "-1.79769313486232e308 .. 1.79769313486232e308", whereas the long data type only has a range of "-9,223,372,036,854,775,808 .. 9,223,372,036,854,775,807". How can a double hold so much more data than a long if they are both 64 bits in size?

http://msdn.microsoft.com/en-us/library/cs7y5x0x(v=vs.90).aspx

Tags: c# types size
3 Answers
再贱就再见
#2 · 2019-02-05 21:14

long is a signed 64-bit integer value and double is a 64-bit floating-point value. Looking at their FCL types might make more sense: long maps to System.Int64 and double maps to System.Double. The difference is in how the 64 bits are spent: double uses a sign bit, an 11-bit exponent, and a 52-bit mantissa, trading exact precision for a far wider range.
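To make that bit layout concrete, here is a minimal sketch (my own addition, not part of the original answer) that pulls a double apart into its sign, exponent, and mantissa fields using BitConverter.DoubleToInt64Bits; the sample value -1.5 is arbitrary:

```csharp
using System;

class Program
{
    static void Main()
    {
        double d = -1.5;
        long bits = BitConverter.DoubleToInt64Bits(d);

        long sign     = (bits >> 63) & 0x1;        // 1 sign bit
        long exponent = (bits >> 52) & 0x7FF;      // 11 exponent bits, biased by 1023
        long mantissa = bits & 0xFFFFFFFFFFFFFL;   // 52 mantissa bits

        Console.WriteLine($"sign={sign}, exponent={exponent - 1023}, mantissa=0x{mantissa:X}");
        // sign=1, exponent=0, mantissa=0xC000000000000's top bit pattern: 0x8000000000000
        // value = -1 * 2^0 * (1 + 0.5) = -1.5
    }
}
```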

萌系小妹纸
#3 · 2019-02-05 21:15

The number of possible doubles and the number of possible longs is the same; they are just distributed differently*.

The longs are uniformly distributed, while the doubles are not; the links below explain this in more detail.

I'd write more, but for some reason the cursor is jumping around all over the place on my phone.

Edit: This might actually be more helpful: http://en.wikipedia.org/wiki/Double-precision_floating-point_format#section_1

Edit2: and this is even better: http://blogs.msdn.com/b/dwayneneed/archive/2010/05/07/fun-with-floating-point.aspx

* According to that link, it would seem that there are actually slightly more longs, since some 64-bit patterns are spent on NaNs and other special values rather than on distinct doubles.
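As an illustration of that non-uniform distribution (my own sketch, not taken from the linked articles), the gap between one double and the next representable double grows with magnitude; stepping the raw bit pattern by one gives the neighbouring value:

```csharp
using System;

class Program
{
    // Next representable double above x (assumes x is positive and finite).
    static double NextUp(double x) =>
        BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(x) + 1);

    static void Main()
    {
        Console.WriteLine(NextUp(1.0) - 1.0);     // ~2.22E-16: doubles are dense near 1
        Console.WriteLine(NextUp(1e16) - 1e16);   // 2: gaps exceed 1 above 2^53
        Console.WriteLine(NextUp(1e300) - 1e300); // ~1.5E+284: enormous gaps near the top
    }
}
```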

我只想做你的唯一
#4 · 2019-02-05 21:16

A simple answer is that double is only accurate to 15-16 significant digits, whereas long (as an integer type) is exact across its entire range, which spans 19 digits. (Keep in mind that digits and values are semantically different.) A short demonstration follows the list below.

double: ±0.00000000000001 to ±99,999,999,999,999.9 at full accuracy, with a loss of accuracy starting from the 16th significant digit, even though the representable range extends to "-1.79769313486232e308 .. 1.79769313486232e308".

long: -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807

ulong: 0 to 18,446,744,073,709,551,615 (one more digit than long, but the same number of distinct values, since the range is merely shifted to exclude negatives).
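Here is the demonstration promised above, a minimal sketch (an addition, not part of the original answer) showing the first integer where double fails to round-trip a value that long stores exactly; 2^53 + 1 is the smallest such positive integer:

```csharp
using System;

class Program
{
    static void Main()
    {
        // 2^53 + 1 is a 16-digit integer that long stores exactly...
        long exact = 9_007_199_254_740_993;

        // ...but double cannot: it rounds to the nearest representable value.
        double approx = exact;

        Console.WriteLine(exact);        // 9007199254740993
        Console.WriteLine((long)approx); // 9007199254740992 — last digit lost
    }
}
```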

In general, integer types are preferred over floating-point types, unless you explicitly need fractional values (for whichever purpose).


In addition, you may know that signed types are preferred over unsigned ones, since the former are much less bug-prone (consider the statement uint i;, then i - x; where x > i: the result wraps around rather than going negative). A sketch of this follows.
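A minimal sketch of that pitfall, using example values of my own choosing:

```csharp
using System;

class Program
{
    static void Main()
    {
        uint i = 2;
        uint x = 5;

        // Unsigned subtraction silently wraps around instead of producing -3.
        uint wrapped = i - x;
        Console.WriteLine(wrapped);       // 4294967293

        // The signed equivalent behaves as most code expects.
        int signedResult = (int)i - (int)x;
        Console.WriteLine(signedResult);  // -3
    }
}
```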
