ECMAScript 6's Number.MAX_SAFE_INTEGER supposedly represents the maximum numerical value JavaScript can store before floating-point precision issues arise. However, the specification also requires that the number 1 added to this value must be exactly representable as a Number.
Number.MAX_SAFE_INTEGER

NOTE: The value of Number.MAX_SAFE_INTEGER is the largest integer n such that n and n + 1 are both exactly representable as a Number value. The value of Number.MAX_SAFE_INTEGER is 9007199254740991 (2^53 − 1).

– ECMAScript Language Specification
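The spec's claim can be checked directly in any ES6-capable console; Number.isSafeInteger is the built-in predicate for exactly this property:

```javascript
// Number.MAX_SAFE_INTEGER is 2^53 - 1, as the spec states
console.log(Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1); // true

// The built-in predicate agrees with the definition:
console.log(Number.isSafeInteger(9007199254740991)); // true  (2^53 - 1)
console.log(Number.isSafeInteger(9007199254740992)); // false (2^53)
```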
The JavaScript consoles of Chrome, Firefox, Opera and IE11 can all safely perform calculations with the number 9,007,199,254,740,992. Some tests:
// Valid
Math.pow(2, 53) // 9007199254740992
9007199254740991 + 1 // 9007199254740992
9007199254740992 - 1 // 9007199254740991
9007199254740992 / 2 // 4503599627370496
4503599627370496 * 2 // 9007199254740992
parseInt('20000000000000', 16) // 9007199254740992
parseInt('80000000000', 32) // 9007199254740992
9007199254740992 - 9007199254740992 // 0
9007199254740992 == 9007199254740991 // false
9007199254740992 == 9007199254740992 // true
// Erroneous
9007199254740992 + 1 // 9007199254740992
9007199254740993 + "" // "9007199254740992"
9007199254740992 == 9007199254740993 // true
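The erroneous results above all have the same cause: beyond 2^53, IEEE-754 doubles can only represent every second integer, so distinct mathematical integers collapse onto the same stored value. A minimal sketch:

```javascript
// Above 2^53, the gap between representable doubles is 2, so the
// odd integer 2^53 + 1 rounds to 2^53 at parse time:
var a = 9007199254740992; // 2^53
var b = 9007199254740993; // 2^53 + 1 -- not representable; rounds to 2^53
console.log(a === b);     // true: two different integers compare equal
console.log(a + 2);       // 9007199254740994: the next representable integer
```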
Why is it a requirement that n + 1 must also be representable as a Number? Why does failing this requirement make the value unsafe?
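To make the question concrete, here is a hypothetical counter (countTo is my own illustrative function, not from any library) that relies on n + 1 differing from n in order to make progress. Past MAX_SAFE_INTEGER the increment is silently lost:

```javascript
// Hypothetical sketch: a loop that assumes incrementing always changes n.
// Once n reaches 2^53, n + 1 rounds back to n and the counter stalls.
function countTo(limit) {
  for (var n = 9007199254740990; n < limit; n++) {
    if (n + 1 === n) {
      return "stuck at " + n; // increment no longer changes the value
    }
  }
  return "reached " + limit;
}

console.log(countTo(9007199254740994)); // "stuck at 9007199254740992"
```

Without the guard inside the loop, this would spin forever, which is the kind of silent failure the "safe" boundary seems intended to flag.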