Reading through the ECMAScript 5.1 specification, +0 and -0 are distinguished. Why then does +0 === -0 evaluate to true?
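For reference, this is the behaviour in question, together with the classic division trick that shows the engine really does keep the sign (results observable in any JavaScript console):

```js
// Both equality operators report the two zeros as equal:
+0 === -0;  // true
+0 == -0;   // true

// Yet the sign is not lost; dividing by each zero shows the difference:
1 / +0;     // Infinity
1 / -0;     // -Infinity
```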
We can use Object.is to distinguish +0 and -0, and one more thing: Object.is(NaN, NaN) returns true, whereas NaN === NaN is false.
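A short illustration (the results follow the SameValue semantics that the spec defines for Object.is):

```js
// Object.is distinguishes the two zeros, while === does not:
Object.is(+0, -0);    // false
+0 === -0;            // true

// Object.is also treats NaN as equal to itself, while === does not:
Object.is(NaN, NaN);  // true
NaN === NaN;          // false
```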
There are two possible values (bit representations) for 0. This is not unique; it can happen especially with floating-point numbers, because floating-point numbers are actually stored as a kind of formula (a sign, an exponent, and a mantissa).
Integers can be stored in separate ways too. You can have a numeric value with an additional sign bit, so in a 16-bit space you can store a 15-bit integer value and a sign bit. In this representation, the values 8000 (hex) and 0000 are both 0, but one of them is +0 and the other is -0.
This could be avoided by offsetting the negative values by 1, so that they ranged from -1 down to -2^15, but that would be inconvenient.
A more common approach is to store integers in two's complement, but apparently ECMAScript has chosen not to. In this method, positive numbers range from 0000 to 7FFF, and negative numbers run from FFFF (-1) down to 8000 (-32768).
Of course, the same rules apply to larger integers too, but I don't want my F to wear out. ;)
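To make the two schemes concrete, here is a small sketch (the decode functions and 16-bit patterns are purely illustrative; JavaScript does not actually store its numbers like this):

```js
// Decode a 16-bit pattern as sign-and-magnitude: the top bit is the sign,
// the remaining 15 bits are the magnitude.
function fromSignMagnitude(bits) {
  const sign = (bits & 0x8000) ? -1 : 1;
  return sign * (bits & 0x7FFF);
}

// Decode the same 16-bit pattern as two's complement.
function fromTwosComplement(bits) {
  return (bits & 0x8000) ? bits - 0x10000 : bits;
}

fromSignMagnitude(0x0000);   // 0       (+0)
fromSignMagnitude(0x8000);   // -0      (a second bit pattern for zero)
fromTwosComplement(0x0000);  // 0
fromTwosComplement(0x8000);  // -32768  (only one zero pattern exists)
```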
JavaScript uses the IEEE 754 standard to represent numbers, and IEEE 754 floating point has a signed zero: +0 and -0 exist as two distinct bit patterns. The Wikipedia article on signed zero contains further information about the different representations.
So this is the reason why, technically, both zeros have to be distinguished.
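One way to see the two representations directly is to inspect the raw IEEE 754 bit patterns (a minimal sketch using a DataView; only the sign bit differs between the two zeros):

```js
function bitsOf(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  // Read the 64-bit pattern back as 16 hex digits.
  return [view.getUint32(0), view.getUint32(4)]
    .map(n => n.toString(16).padStart(8, '0'))
    .join('');
}

bitsOf(+0);  // "0000000000000000"
bitsOf(-0);  // "8000000000000000"  (only the sign bit is set)
+0 === -0;   // true, despite the different representations
```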
This behaviour is explicitly defined in section 11.9.6, the Strict Equality Comparison Algorithm: when both operands are Numbers, the algorithm returns true if x is +0 and y is -0, and likewise if x is -0 and y is +0.
(The same holds for +0 == -0, by the way.)

It seems logical to treat +0 and -0 as equal. Otherwise we would have to take this into account in our code and I, personally, don't want to do that ;)
Note: ES2015 introduces a new comparison method, Object.is. Object.is explicitly distinguishes between -0 and +0.
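For example (the results follow the SameValue semantics that ES2015 specifies for Object.is):

```js
Object.is(-0, +0);  // false
Object.is(+0, +0);  // true

// Strict equality still treats the two zeros as equal:
-0 === +0;          // true
```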