Lately I've been working on a project in Java and noticed a very strange feature of the double/Double implementation. The double type in Java has two zeros, i.e. 0.0 and -0.0 (signed zeros). The strange thing is that:
0.0 == -0.0
evaluates to true
, but:
new Double(0.0).equals(new Double(-0.0))
evaluates to false
. Does anyone know the reason behind this?
It is all explained in the javadoc for Double.equals(): "If d1 represents +0.0 while d2 represents -0.0, or vice versa, the equal test has the value false, even though +0.0==-0.0 has the value true. This definition allows hash tables to operate properly."
Now you might ask why
0.0 == -0.0
is true. In fact the two values are not strictly identical: their underlying bit patterns differ, so for example
Double.doubleToRawLongBits(0.0) == Double.doubleToRawLongBits(-0.0)
is false. However, the JLS requires ("in accordance with the rules of the IEEE 754 standard") that positive zero and negative zero compare equal, hence
0.0 == -0.0
is true.
It is important to understand the use of signed zero in the Double class. (Loads of experienced Java programmers don't.)
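A minimal sketch of how to see this for yourself (the use of Long.toHexString is my own choice for displaying the bit patterns):

```java
public class SignedZeroBits {
    public static void main(String[] args) {
        // The raw IEEE 754 bit patterns differ: -0.0 has only the sign bit set.
        System.out.println(Long.toHexString(Double.doubleToRawLongBits(0.0)));  // 0
        System.out.println(Long.toHexString(Double.doubleToRawLongBits(-0.0))); // 8000000000000000
        // Yet the primitive == comparison follows IEEE 754 and treats them as equal.
        System.out.println(0.0 == -0.0); // true
    }
}
```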
The short answer is that, by definition, "-0.0 is less than 0.0" in all the comparison methods provided by the Double class (that is, equals(), compare(), compareTo(), etc.).
Double allows all floating-point values to be "totally ordered on a number line", whereas the primitives behave the way a user would intuitively expect (a real-world definition): 0d == -0d.
The following snippets illustrate the behaviour ...
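The original snippets are not preserved here; the following is a hedged sketch of what they likely demonstrated (the specific calls are my own choice):

```java
public class SignedZeroOrdering {
    public static void main(String[] args) {
        // Primitive comparison: IEEE 754 says the two zeros are equal.
        System.out.println(0.0 == -0.0);               // true
        // Double's total ordering: -0.0 is strictly less than 0.0.
        System.out.println(Double.compare(-0.0, 0.0)); // negative
        // equals() and compareTo() on boxed Doubles agree with that ordering.
        System.out.println(Double.valueOf(-0.0).equals(Double.valueOf(0.0)));    // false
        System.out.println(Double.valueOf(-0.0).compareTo(Double.valueOf(0.0))); // negative
    }
}
```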
There are other posts that are relevant and nicely explain the background ...
1: Why do floating-point numbers have signed zeros?
2: Why is Java's Double.compare(double, double) implemented the way it is?
And a word of caution ...
If you don't know that, in the Double class, "-0.0 is less than 0.0", you may get caught out when using methods like equals(), compare(), and compareTo() from Double in logic tests. For example, look at ...
and for equals you might try ... new Double(d3).equals(0d) || new Double(d3).equals(-0d)
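Since the original example is not preserved, here is a hypothetical sketch of that kind of gotcha (the variable d3 and its value are my own, chosen for illustration):

```java
public class SignedZeroGotcha {
    public static void main(String[] args) {
        double d3 = -0.0; // hypothetical: e.g. the result of some calculation
        // A "is d3 zero?" test via equals() catches only one of the two zeros ...
        System.out.println(Double.valueOf(d3).equals(0d)); // false
        // ... so both zeros must be tested explicitly:
        System.out.println(Double.valueOf(d3).equals(0d)
                || Double.valueOf(d3).equals(-0d));        // true
        // The primitive comparison needs no such workaround.
        System.out.println(d3 == 0d);                      // true
    }
}
```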
By using the == operator you are comparing primitive values. With equals() you are comparing objects (and Double.equals() compares the underlying bit representations, which differ for 0.0 and -0.0).
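A short sketch contrasting the two comparisons, including the other documented special case (NaN) where they disagree in the opposite direction:

```java
public class ValueVsObject {
    public static void main(String[] args) {
        // == compares numeric values per IEEE 754.
        System.out.println(0.0 == -0.0);              // true
        System.out.println(Double.NaN == Double.NaN); // false
        // Double.equals() compares the stored bit representations, so both flip.
        System.out.println(Double.valueOf(0.0).equals(Double.valueOf(-0.0)));              // false
        System.out.println(Double.valueOf(Double.NaN).equals(Double.valueOf(Double.NaN))); // true
    }
}
```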