In JavaScript, everyone knows the famous calculation: 0.1 + 0.2 = 0.30000000000000004. But why does JavaScript print this value instead of printing the more accurate and precise 0.300000000000000044408920985006?
The default rule for JavaScript when converting a Number value to a decimal numeral is to use just enough digits to distinguish the Number value. (You can request more or fewer digits by using the toPrecision method.)
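For example (a quick sketch; the exact digits shown assume IEEE-754 doubles, and toFixed with more than 20 fraction digits requires an ES2018-level engine):

```js
const sum = 0.1 + 0.2;

// Default conversion: just enough digits to identify the Number value.
console.log(String(sum));         // "0.30000000000000004"

// Request more digits explicitly.
console.log(sum.toPrecision(21)); // "0.300000000000000044409"

// The full exact value stored in the double (ES2018+ allows up to 100 digits here).
console.log(sum.toFixed(52));     // "0.3000000000000000444089209850062616169452667236328125"
```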
JavaScript uses IEEE-754 basic 64-bit binary floating-point for its Number type. Using IEEE-754, the result of .1 + .2 is exactly 0.3000000000000000444089209850062616169452667236328125. This results from:

- converting “.1” to the nearest value representable in the Number type,
- converting “.2” to the nearest value representable in the Number type, and
- adding those two values and rounding the sum to the nearest value representable in the Number type.
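Each of those three roundings can be observed directly by printing the exact values that the literals and the sum become (again assuming an ES2018-level toFixed):

```js
// ".1" and ".2" are already rounded when the source text is parsed:
console.log((0.1).toFixed(55)); // "0.1000000000000000055511151231257827021181583404541015625"
console.log((0.2).toFixed(55)); // "0.2000000000000000111022302462515654042363166809082031250"

// The addition rounds once more, to the nearest representable Number:
console.log((0.1 + 0.2).toFixed(55)); // "0.3000000000000000444089209850062616169452667236328125000"
```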
When formatting this Number value for display, “0.30000000000000004” has just enough significant digits to uniquely distinguish the value. To see this, observe that the neighboring values are 0.299999999999999988897769753748434595763683319091796875, 0.3000000000000000444089209850062616169452667236328125, and 0.300000000000000099920072216264088638126850128173828125.
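Those three decimal strings parse back to three distinct, adjacent Number values, and you can check that the sum is exactly the middle one while the literal 0.3 is the lower one (a small sketch; the variable names are mine):

```js
const lower  = Number("0.299999999999999988897769753748434595763683319091796875");
const middle = Number("0.3000000000000000444089209850062616169452667236328125");
const upper  = Number("0.300000000000000099920072216264088638126850128173828125");

console.log(0.1 + 0.2 === middle); // true: the sum is exactly the middle neighbor
console.log(0.3 === lower);        // true: the literal 0.3 rounds to the lower neighbor
console.log(new Set([lower, middle, upper]).size); // 3: all distinct Number values
```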
If the conversion to a decimal numeral produced only “0.3000000000000000”, it would be nearer to 0.299999999999999988897769753748434595763683319091796875 than to 0.3000000000000000444089209850062616169452667236328125. Therefore, another digit is needed. When we have that digit, “0.30000000000000004”, then the result is closer to 0.3000000000000000444089209850062616169452667236328125 than to either of its neighbors. Therefore, “0.30000000000000004” is the shortest decimal numeral (neglecting the leading “0”, which is there for aesthetic purposes) that uniquely distinguishes which possible Number value the original value was.
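This round-trip criterion is easy to check (a small sketch):

```js
const sum = 0.1 + 0.2;

// 16 significant digits parse back to the lower neighbor, not to the sum:
console.log(Number("0.3000000000000000") === sum); // false
console.log(Number("0.3000000000000000") === 0.3); // true

// 17 significant digits are enough to get the original Number back:
console.log(Number("0.30000000000000004") === sum); // true
```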
This rule comes from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification, which is one of the steps in converting a Number value m to a decimal numeral for the ToString operation. That step requires choosing integers n, k, and s such that k ≥ 1, 10^(k−1) ≤ s < 10^k, the Number value for s × 10^(n−k) is m, and k is as small as possible.

The phrasing here is a bit imprecise. It took me a while to figure out that by “the Number value for s × 10^(n−k)”, the standard means the Number value that is the result of converting the mathematical value s × 10^(n−k) to the Number type (with the usual rounding). In this description, k is the number of significant digits that will be used, and this step is telling us to minimize k: use the smallest number of digits such that the numeral we produce will, when converted back to the Number type, produce the original number m.
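That minimization can be emulated naively: try k = 1, 2, 3, … significant digits until converting the numeral back yields the original Number (this is only a sketch of the rule, with a helper name of my own choosing; real engines compute the shortest form directly rather than searching):

```js
// Return the smallest k (and the numeral) such that a k-digit decimal
// numeral converts back to exactly the Number value m.
function shortestDigits(m) {
  for (let k = 1; k <= 21; k++) {           // 17 digits always suffice for doubles
    const numeral = m.toPrecision(k);
    if (Number(numeral) === m) return { k, numeral };
  }
  return { k: 21, numeral: m.toPrecision(21) }; // not reached for finite Numbers
}

console.log(shortestDigits(0.1 + 0.2)); // { k: 17, numeral: '0.30000000000000004' }
console.log(shortestDigits(0.25));      // { k: 2, numeral: '0.25' }
```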