Range of floating point numbers in .NET?

Posted 2020-04-10 14:59

Excerpt from a book:

A float value consists of a 24-bit signed mantissa and an 8-bit signed exponent. The precision is approximately seven decimal digits. Values range from -3.402823 × 10^38 to 3.402823 × 10^38

How is this range calculated? Can someone explain the binary arithmetic?

Tags: c# .net
2 Answers
乱世女痞
#2 · 2020-04-10 15:23

You need to read "What Every Computer Scientist Should Know About Floating-Point Arithmetic", which explains how floating-point numbers are stored and will answer your question.

太酷不给撩
#3 · 2020-04-10 15:30

I would definitely read the article linked in the first answer. But if you need a simpler explanation, I hope this helps:

Basically, as you said, there is 1 sign bit, 8 bits for the exponent, and 23 bits for the fraction. Then, using this equation (from Wikipedia)

N = (1 - 2s) * 2^(x-127) * (1 + m*2^-23)

where s is the sign bit, x is the stored (biased) exponent from which the bias of 127 is subtracted, and m is the 23-bit fraction field treated as a whole number (the equation above converts it back into the appropriate fractional value).
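
To make the formula concrete, here is a minimal C# sketch (my own addition, not part of the original answer) that pulls s, x, and m out of a float with BitConverter and rebuilds the value from the equation above:

using System;

class FloatBits
{
    static void Main()
    {
        float value = 6.5f;

        // Reinterpret the float's 32 bits as an int so the fields can be masked out.
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);

        int s = (bits >> 31) & 0x1;      // 1 sign bit
        int x = (bits >> 23) & 0xFF;     // 8-bit biased exponent
        int m = bits & 0x7FFFFF;         // 23-bit fraction

        // N = (1 - 2s) * 2^(x - 127) * (1 + m * 2^-23)
        double n = (1 - 2 * s) * Math.Pow(2, x - 127) * (1 + m * Math.Pow(2, -23));

        Console.WriteLine($"s={s} x={x} m={m} -> N={n}");   // prints s=0 x=129 m=5242880 -> N=6.5
    }
}

Note this only covers normalized values; zero and the denormals (x = 0) use a slightly different formula.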

Note that the exponent value 0xFF is reserved for the special values infinity and NaN, so the largest exponent of a finite value is 0xFE.
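
As a quick check (again my own sketch, not from the answer), forcing the exponent field to 0xFF in .NET really does give those special values:

using System;

class SpecialValues
{
    static void Main()
    {
        // Exponent = 0xFF, fraction = 0  ->  positive infinity
        float inf = BitConverter.ToSingle(BitConverter.GetBytes(0xFF << 23), 0);
        Console.WriteLine(float.IsPositiveInfinity(inf));   // True

        // Exponent = 0xFF, fraction != 0  ->  NaN
        float nan = BitConverter.ToSingle(BitConverter.GetBytes((0xFF << 23) | 1), 0);
        Console.WriteLine(float.IsNaN(nan));                 // True
    }
}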

Plugging s = 0, x = 254 (0xFE), and m = 2^23 - 1 into the formula, you see that the maximum value is

N = (1 - 2*0) * 2^(254-127) * (1 + (2^23 - 1) * 2^-23)

N = 1 * 2^127 * (2 - 2^-23)

N ≈ 3.402823 x 10^38

The minimum value would be the same but with the sign bit set, which simply negates the value to give you -3.402823 x 10^38.

Q.E.D.
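
And to verify the arithmetic (my own sketch, not part of the original answer), you can plug x = 254 and m = 2^23 - 1 into the formula and compare the result against float.MaxValue:

using System;

class MaxFloat
{
    static void Main()
    {
        int x = 254;              // 0xFE, the largest exponent of a finite value
        int m = (1 << 23) - 1;    // all 23 fraction bits set

        // N = 2^(254 - 127) * (1 + (2^23 - 1) * 2^-23) = 2^127 * (2 - 2^-23)
        double max = Math.Pow(2, x - 127) * (1 + m * Math.Pow(2, -23));

        Console.WriteLine(max);              // ~3.4028234663852886E+38
        Console.WriteLine(float.MaxValue);   // ~3.402823E+38, the same value
    }
}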
