In an article on MSDN, it states that the `double` data type has a range of "-1.79769313486232e308 .. 1.79769313486232e308", whereas the `long` data type only has a range of "-9,223,372,036,854,775,808 .. 9,223,372,036,854,775,807". How can a `double` hold so much more data than a `long` if they are both 64 bits in size?

http://msdn.microsoft.com/en-us/library/cs7y5x0x(v=vs.90).aspx
`long` is a signed 64-bit integer value and `double` is a 64-bit floating-point value. Looking at their FCL types might make more sense: `long` maps to `System.Int64` and `double` maps to `System.Double`.

The number of possible doubles and the number of possible longs is the same; they are just distributed differently*. The longs are uniformly distributed, while the doubles are not: they are packed densely near zero and spaced further and further apart as the magnitude grows. You can read more here.
Edit: This might actually be more helpful: http://en.wikipedia.org/wiki/Double-precision_floating-point_format#section_1
Edit2: and this is even better: http://blogs.msdn.com/b/dwayneneed/archive/2010/05/07/fun-with-floating-point.aspx
* According to that link, it would seem that there are actually slightly more longs, since some double bit patterns are used up by NaNs and other special values.
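To make that distribution point concrete, here is a minimal C# sketch (not from the original answer; the `NextUp` helper is my own illustration) that reinterprets a double's bits as a long to step to the next representable value, showing how the gap between neighbouring doubles grows with magnitude:

```csharp
using System;

class DoubleSpacing
{
    // Next representable double above a positive, finite x: add 1 to its
    // raw 64-bit pattern. This works because, for positive doubles, the
    // IEEE 754 ordering matches the ordering of their bit patterns.
    static double NextUp(double x) =>
        BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(x) + 1);

    static void Main()
    {
        foreach (double x in new[] { 1.0, 1e6, 1e15, 1e18 })
        {
            // Gap between x and the next representable double ("one ulp").
            Console.WriteLine($"spacing at {x}: {NextUp(x) - x}");
        }
        // The spacing grows from about 2.2e-16 at 1.0 to 128 at 1e18:
        // doubles get sparser as magnitude grows, unlike longs, which
        // always sit exactly 1 apart.
    }
}
```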
A simple answer is that `double` is only accurate to 15-16 significant digits, as opposed to `long`, which (as an integer type) is exact for every value within its explicit digit limit, in this case 19 digits. (Keep in mind that digits and values are semantically different.)

- `double`: full accuracy for roughly the first 15 significant digits, with a loss of accuracy starting around the 16th digit, while the exponent stretches the overall range out to "-1.79769313486232e308 .. 1.79769313486232e308" (see the sketch below).
- `long`: -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
- `ulong`: 0 to 18,446,744,073,709,551,615 (one more digit than `long`, but the same number of values, since the range has simply been shifted to exclude negatives)

In general, integer types are preferred over floating-point values unless you explicitly need a fractional or decimal representation (for whichever purpose).
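A minimal sketch of that digit limit, using only standard conversions (the constants 2^53 and 2^53 + 1 are my choice of illustration, not from the answer):

```csharp
using System;

class DigitLimit
{
    static void Main()
    {
        // 2^53 is the last power of two below which every integer has an
        // exact double representation; it is a 16-digit number.
        long exact  = 9007199254740992;   // 2^53
        long beyond = 9007199254740993;   // 2^53 + 1: no exact double exists

        Console.WriteLine((long)(double)exact  == exact);   // True
        Console.WriteLine((long)(double)beyond == beyond);  // False: the
        // conversion rounds 2^53 + 1 to the nearest representable double,
        // 2^53, so the round trip silently loses the last digit.
    }
}
```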
In addition, you may know that signed types are preferred over unsigned ones, since the former are much less bug-prone: consider declaring `uint i;` and then evaluating `i - x` where `x > i`; instead of a negative result, the value wraps around to a huge positive number.
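A short sketch of that pitfall (the variable names are illustrative):

```csharp
using System;

class UnsignedPitfall
{
    static void Main()
    {
        uint i = 5;
        uint x = 10;

        // There is no -5 in uint, so the subtraction wraps modulo 2^32.
        uint diff = i - x;
        Console.WriteLine(diff);  // 4294967291, i.e. 2^32 - 5

        // A checked context turns the silent wraparound into an exception:
        // checked { uint boom = i - x; }  // throws OverflowException
    }
}
```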