The IEEE 754 standard defines the square root of negative zero as negative zero. This choice is easy enough to rationalize, but other choices, such as defining sqrt(-0.0) as NaN, can be rationalized too and are easier to implement in hardware. If the fear was that programmers would write if (x >= 0.0) then sqrt(x) else 0.0 and be bitten by this expression evaluating to NaN when x is -0.0, then sqrt(-0.0) could have been defined as +0.0 (in fact, for this particular expression, returning +0.0 would match the else branch, so the results would be even more consistent).
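
For concreteness, here is a minimal C sketch (assuming an IEEE 754-conforming double and a standard libm) showing why the x >= 0.0 guard does not reject negative zero, and what the guarded expression actually evaluates to under the standard's definition:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x = -0.0;

        /* -0.0 compares equal to +0.0, so the guard takes the "then" branch
           and sqrt is called with -0.0 anyway. */
        double guarded = (x >= 0.0) ? sqrt(x) : 0.0;

        /* Under IEEE 754, sqrt(-0.0) is -0.0, so neither value is NaN. */
        printf("sqrt(-0.0) = %g\n", sqrt(x));                /* prints -0 */
        printf("guarded    = %g\n", guarded);                /* prints -0 */
        printf("sign bit   = %d\n", signbit(guarded) != 0);  /* prints 1  */
        return 0;
    }

If sqrt(-0.0) had instead been defined as NaN, the same guard would produce NaN for x = -0.0; if it had been defined as +0.0, the result would coincide with the else branch.
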
Is there a particular numerical algorithm where having sqrt(-0.0) defined as -0.0 simplifies the logic of the algorithm itself?