Why do comparisons of NaN values behave differently from all other values? That is, any comparison with the operators ==, <=, >=, <, > where one or both operands is NaN returns false, contrary to the behaviour of all other values.
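For concreteness, here is a small C program demonstrating the behaviour (just an illustration, using the NAN macro from C99's <math.h>):

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = NAN;
    printf("%d\n", x == x);   /* 0: even self-equality fails */
    printf("%d\n", x <= x);   /* 0 */
    printf("%d\n", x < 1.0);  /* 0 */
    printf("%d\n", x > 1.0);  /* 0 */
    printf("%d\n", x != x);   /* 1: the only comparison that is true */
    return 0;
}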
I suppose this simplifies numerical computations in some way, but I couldn't find an explicitly stated reason, not even in the Lecture Notes on the Status of IEEE 754 by Kahan, which discusses other design decisions in detail.
This deviant behaviour causes trouble when doing simple data processing. For example, when sorting a list of records w.r.t. some real-valued field in a C program, I need to write extra code that treats NaN as the maximal element, otherwise the sort algorithm could become confused; see the comparator sketched below.
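The extra code looks something like this (my own workaround, nothing mandated by IEEE 754; cmp_double is just an illustrative name):

#include <math.h>
#include <stdlib.h>

/* Comparator for qsort that imposes a total order on doubles
   by treating every NaN as the maximal element. */
static int cmp_double(const void *pa, const void *pb) {
    double a = *(const double *)pa;
    double b = *(const double *)pb;
    int a_nan = isnan(a), b_nan = isnan(b);

    if (a_nan || b_nan)
        return a_nan - b_nan;  /* NaN sorts after every number */
    if (a < b) return -1;
    if (a > b) return 1;
    return 0;
}

/* usage: qsort(values, n, sizeof(double), cmp_double); */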
Edit: The answers so far all argue that it is meaningless to compare NaNs.
I agree, but that doesn't mean that the correct answer is false; rather, it would be a Not-a-Boolean (NaB), which fortunately doesn't exist.
So the choice of returning true or false for comparisons is in my view arbitrary, and for general data processing it would be advantageous if it obeyed the usual laws (reflexivity of ==, trichotomy of <, ==, >), lest data structures which rely on these laws become confused.
So I'm asking for some concrete advantage of breaking these laws, not just philosophical reasoning.
Edit 2: I think I understand now why making NaN maximal would be a bad idea: it would mess up the computation of upper limits.
NaN != NaN might be desirable to avoid detecting convergence in a loop such as
while (x != oldX) {
    oldX = x;
    x = better_approximation(x);
}
which, however, would better be written by comparing the absolute difference against a small tolerance, as sketched below. So IMHO this is a relatively weak argument for breaking reflexivity at NaN.
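Something along these lines (a sketch; better_approximation is the same placeholder as in the loop above, and EPS is a hypothetical tolerance):

#include <math.h>

#define EPS 1e-12  /* hypothetical convergence tolerance */

double better_approximation(double x);  /* placeholder defined elsewhere */

double iterate(double x) {
    double oldX;
    do {
        oldX = x;
        x = better_approximation(x);
    } while (fabs(x - oldX) > EPS);
    /* Note: if x becomes NaN, fabs(x - oldX) is NaN, the comparison
       is false and the loop exits; the caller can then detect the
       failure with isnan(x) instead of spinning forever. */
    return x;
}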
Very short answer:
Because the following:
nan / nan = 1
must NOT hold. Otherwise inf / inf would be 1. (Therefore nan cannot be equal to nan. As for > or <: if nan respected any order relation in a set satisfying the Archimedean property, we would again have nan / nan = 1 in the limit.)
From the Wikipedia article on NaN, the following practices may cause NaNs:
- operations with a NaN as at least one operand;
- indeterminate forms, such as 0/0, inf/inf, 0 * inf and inf - inf;
- real operations with complex results, such as the square root or logarithm of a negative number.
Since there is no way to know which of these operations created the NaN, there is no way to compare them that makes sense.
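For instance, all of the following produce NaN, and the value itself carries no record of where it came from (a quick illustration):

#include <math.h>
#include <stdio.h>

int main(void) {
    double a = 0.0 / 0.0;            /* indeterminate form */
    double b = INFINITY - INFINITY;  /* another indeterminate form */
    double c = sqrt(-1.0);           /* complex result, NaN in real arithmetic */

    printf("%d %d %d\n", isnan(a), isnan(b), isnan(c));  /* prints: 1 1 1 */
    return 0;
}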
NaN can be thought of as an undefined state/number, similar to the concept of 0/0 being undefined, or sqrt(-3) (in the real number system, where floating point lives).
NaN is used as a sort of placeholder for this undefined state. Mathematically speaking, undefined is not equal to undefined. Neither can you say an undefined value is greater or less than another undefined value. Therefore all comparisons return false.
This behaviour is also advantageous in cases where you compare sqrt(-3) to sqrt(-2): both return NaN, but they are not equivalent even though they return the same value. Therefore having equality always return false when dealing with NaN is the desired behaviour.
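To illustrate (my example, not part of the standard):

#include <math.h>
#include <stdio.h>

int main(void) {
    double a = sqrt(-3.0);  /* NaN */
    double b = sqrt(-2.0);  /* NaN */

    /* Treating the two NaNs as equal would wrongly suggest that
       sqrt(-3) equals sqrt(-2); the comparison is false instead. */
    printf("%d\n", a == b);  /* prints: 0 */
    return 0;
}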
I don't know the design rationale, but here's an excerpt from the IEEE 754-1985 standard:
"It shall be possible to compare floating-point numbers in all supported formats, even if the operands' formats differ. Comparisons are exact and never overflow nor underflow. Four mutually exclusive relations are possible: less than, equal, greater than, and unordered. The last case arises when at least one operand is NaN. Every NaN shall compare unordered with everything, including itself."
Because mathematics is a field where numbers "just exist". In computing you must initialize those numbers and keep track of their state according to your needs. In the old days, memory initialization worked in ways you could never rely on; you could never allow yourself to think "oh, that memory is always initialized with 0xCD, my algorithm will not break".
So you need a value that does not mix with real data and is "sticky" enough that your algorithm cannot silently get sucked in and broken by it. Good algorithms involving numbers mostly work with relations, and any if() relation involving NaN is simply skipped, because every comparison is false.
NaN is the value you can put into a new variable at creation, instead of whatever random garbage happens to be in memory, and your algorithm, whatever it is, will not break silently.
Then, when you suddenly find that your algorithm is producing NaNs, you can clean it up by looking into every branch one at a time. Again, the "always false" rule helps a lot with this.
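A sketch of that initialization idiom (my illustration, not from the answer above):

#include <math.h>
#include <stdio.h>

int main(void) {
    double reading = NAN;  /* sentinel: "no value assigned yet" */

    /* ... code that may or may not assign a real value ... */

    if (isnan(reading))
        printf("reading was never initialized\n");
    return 0;
}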
For me, the easiest way to explain it is:
You can't compare NaN with something else (even itself), because it does not have a value. Also, it could be any value (except a number).