My program can generate nan or -nan values during calculation.
I check whether a value is nan/-nan using the isnan macro.
I also need to distinguish whether the NaN value is positive or negative (nan or -nan). How can I do this?
Added: I need a cross-platform solution, for both Windows and Unix/Linux.
Try signbit from <math.h>:
Description
signbit() is a generic macro which can work on all real floating-point
types. It returns a nonzero value if the value of x has its sign bit
set.
...
NaNs and infinities have a sign bit.
It's part of C99 and POSIX.1-2001, but you could write a macro/function yourself if you don't want to depend on either of the two.
Nearly all systems today use IEEE single or double precision floating-point, so you can (bitwise) reinterpret the value as an integer and read the sign bit directly.
Here's one approach that uses a union. Type-punning through a union is explicitly allowed in C99 and later (though not in C++), and it works on nearly all systems in practice.
#include <stdint.h>

union {
    double f;
    uint64_t i;
} x;

x.f = ... // Your floating-point value (can be NaN)

// Check the sign bit (the most significant bit of the 64-bit pattern).
if ((x.i & 0x8000000000000000ull) == 0) {
    // positive
} else {
    // negative
}
You could use the copysign function (C99, in <math.h>):
double sign = copysign(1.0, your_nan);
From C99 §7.12.11.1:
Description
The copysign functions produce a value with the magnitude of x and the sign of y.
They produce a NaN (with the sign of y) if x is a NaN. On implementations that
represent a signed zero but do not treat negative zero consistently in arithmetic
operations, the copysign functions regard the sign of zero as positive.
Returns
The copysign functions return a value with the magnitude of x and the sign of y.