The IEEE 754 standard defines the square root of negative zero as negative zero. This choice is easy enough to rationalize, but other choices, such as defining `sqrt(-0.0)` as NaN, can be rationalized too and are easier to implement in hardware. If the fear was that programmers would write `if (x >= 0.0) then sqrt(x) else 0.0` and be bitten by this expression evaluating to NaN when `x` is `-0.0` (the guard does not help, since `-0.0 >= 0.0` is true), then `sqrt(-0.0)` could have been defined as `+0.0` instead; indeed, for this particular expression, `+0.0` would give an even more consistent result.
Is there a numerical algorithm in particular where having `sqrt(-0.0)` defined as `-0.0` simplifies the logic of the algorithm itself?
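For concreteness, here is a minimal C sketch of the behavior in question, assuming an IEEE 754-conforming implementation (`sqrt` and `signbit` are from `<math.h>`):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = -0.0;

    /* Under IEEE 754, sqrt(-0.0) is -0.0, not NaN. */
    double r = sqrt(x);
    printf("sqrt(-0.0) = %g (signbit = %d)\n", r, signbit(r) != 0);

    /* The guard from the question: -0.0 >= 0.0 is true, because
       signed zeros compare equal. The sqrt branch is therefore
       taken, so had sqrt(-0.0) been defined as NaN, the guard
       would not have prevented a NaN result. */
    double guarded = (x >= 0.0) ? sqrt(x) : 0.0;
    printf("guarded result = %g (signbit = %d)\n",
           guarded, signbit(guarded) != 0);
    return 0;
}
```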
The only mathematically reasonable result is 0; the real question is whether it should be +0 or -0. For most computations it makes no difference at all, but there are some specific complex-arithmetic expressions for which the result makes more sense under the -0 convention. The exact details are outside the scope of this site, but that's the gist of it.
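For a taste of the complex-arithmetic angle: C's Annex G requires `csqrt(conj(z)) == conj(csqrt(z))`, so the sign of a zero imaginary part selects which side of the branch cut along the negative real axis you are on, and the real `sqrt(-0.0) == -0.0` convention follows the same sign-of-zero discipline. A minimal sketch, assuming an implementation with Annex G support:

```c
#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Two inputs that differ only in the sign of a zero imaginary
       part; they lie on opposite sides of csqrt's branch cut along
       the negative real axis. */
    double complex above = -4.0 + 0.0 * I;  /* -4 + 0i */
    double complex below = conj(above);     /* -4 - 0i */

    double complex ra = csqrt(above);  /* expected 0 + 2i */
    double complex rb = csqrt(below);  /* expected 0 - 2i, since Annex G
                                          requires csqrt(conj(z)) ==
                                          conj(csqrt(z)) */

    printf("csqrt(-4+0i) = %g%+gi\n", creal(ra), cimag(ra));
    printf("csqrt(-4-0i) = %g%+gi\n", creal(rb), cimag(rb));

    /* The real function's behavior at negative zero: */
    printf("sqrt(-0.0)   = %g\n", sqrt(-0.0));
    return 0;
}
```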
I may explain some more when I'm not on vacation, if someone else doesn't beat me to it.