I stumbled upon the following example on Wikipedia (http://en.wikipedia.org/wiki/Type_conversion#Implicit_type_conversion):
#include <stdio.h>

int main()
{
    int i_value = 16777217;
    float f_value = 16777217.0;
    printf("The integer is: %i\n", i_value);             // 16777217
    printf("The float is: %f\n", f_value);               // 16777216.000000
    printf("Their equality: %i\n", i_value == f_value);  // result is 0
}
Their explanation: "This odd behavior is caused by an implicit cast of i_value to float when it is compared with f_value; a cast which loses precision, making the values being compared different."
Isn't this wrong? If i_value were cast to float, then both would have the same loss in precision and they would be equal.
So i_value must be cast to double.
No, in the case of the equality operator, the "usual arithmetic conversions" occur, which start off:
- First, if the corresponding real type of either operand is long double, the other operand is converted, without change of type domain, to a type whose corresponding real type is long double.
- Otherwise, if the corresponding real type of either operand is double, the other operand is converted, without change of type domain, to a type whose corresponding real type is double.
- Otherwise, if the corresponding real type of either operand is float, the other operand is converted, without change of type domain, to a type whose corresponding real type is float.
This last case applies here: i_value is converted to float.
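To see why the extra precision matters, force the converted operand into an actual float object; a minimal sketch (the variable name as_float is mine, and the expected output assumes IEEE-754 single precision and a compiler that honours the narrowing cast):

#include <stdio.h>

int main(void)
{
    int i_value = 16777217;
    float f_value = 16777217.0f;

    /* The cast (or a store into a float object) discards any extra
       precision, so as_float really holds the nearest float to
       16777217, which is 16777216.0f. */
    float as_float = (float)i_value;

    printf("i_value as float: %f\n", as_float);                /* 16777216.000000 */
    printf("Equal after the cast: %d\n", as_float == f_value); /* 1 */
    return 0;
}

Once both sides genuinely have float precision they compare equal; the 0 in the original program can only come from one operand carrying more precision than its type promises.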
The reason you can nevertheless see an odd result from the comparison is this caveat to the usual arithmetic conversions:
The values of floating operands and of the results of floating expressions may be represented in greater precision and range than that required by the type; the types are not changed thereby.
This is what is happening: the type of the converted i_value is still float, but in this expression your compiler is taking advantage of this latitude and representing it in greater precision than float. This is typical compiler behaviour when compiling for 387-compatible floating point, because the compiler leaves temporary values on the floating-point stack, which stores floating-point numbers in an 80-bit extended-precision format.
If your compiler is gcc, you can disable this additional precision by giving the -ffloat-store command-line option.
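If recompiling with that flag is inconvenient, a similar effect can be obtained in the source; a sketch that assumes the 80-bit x87 registers are the only source of the extra precision (the variable name narrowed and the use of volatile are my own):

#include <stdio.h>

int main(void)
{
    int i_value = 16777217;
    float f_value = 16777217.0f;

    /* Storing the converted value in a volatile float forces it out of the
       80-bit x87 register into a real 32-bit object, discarding the extra
       precision much as -ffloat-store would. */
    volatile float narrowed = i_value;

    printf("Their equality: %i\n", narrowed == f_value);  /* 1 */
    return 0;
}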
There are some good answers here. You must be very careful when converting between integer types and the various floating-point representations.
I generally don't test floating-point numbers for equality, especially if one of them comes from an implicit or explicit conversion from an integer type. I work on an application that is full of geometric calculations. As much as possible, we work with normalized integers (by forcing a maximum precision that we will accept in the input data). For the cases where floating point is unavoidable, we compare the absolute value of the difference against a tolerance rather than testing for equality, as in the sketch below.
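A minimal sketch of that style of comparison (the helper name nearly_equal and the tolerance values are illustrative, not taken from our code base):

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Treat two floats as equal when they are within an absolute tolerance.
   Real code usually derives the tolerance from the precision of the
   input data rather than hard-coding it. */
static bool nearly_equal(float a, float b, float tolerance)
{
    return fabsf(a - b) <= tolerance;
}

int main(void)
{
    printf("%d\n", nearly_equal(1.0f, 1.0f + 1e-7f, 1e-5f)); /* 1: within tolerance */
    printf("%d\n", nearly_equal(1.0f, 1.5f, 1e-5f));         /* 0: too far apart */
    return 0;
}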
I believe that the largest value up to which a 32-bit IEEE float can represent every integer exactly is 16777216 (2^24), which is just below the number above. So it is definitely true that the floating-point value will not hold exactly 16777217.
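One way to see that threshold directly is to ask for the next representable float above 2^24; a small sketch, assuming IEEE-754 single precision and a C99 math library:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Above 2^24 adjacent floats are 2 apart, so odd integers such as
       16777217 have no exact single-precision representation. */
    float below = 16777216.0f;                  /* 2^24 */
    float above = nextafterf(below, INFINITY);  /* next representable float */

    printf("%.1f -> %.1f\n", below, above);     /* 16777216.0 -> 16777218.0 */
    return 0;
}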
The part I'm not sure about is how the compiler makes the comparison between two different types of numbers (i.e. a float and an int). I can think of three different ways this might be done:
1) Convert both values to "float" (this should make the values the same, so this is probably not what the compiler does)
2) Convert both values to "int" (this may or may not make them compare equal ... converting to an int truncates, so if the floating-point value were 16777216.99999, converting it to an "int" would drop the fraction)
3) Convert both values to "double". My guess is that this is what the compiler does. If so, the two values would definitely be different: a double can hold 16777217 exactly, and it can also exactly represent the floating-point value that 16777217.0 actually converts to (which is not exactly 16777217.0).
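For what it's worth, the third possibility is easy to act out by hand with explicit conversions; a minimal sketch (the variable names di and df are mine, and the printed values assume IEEE-754 single and double precision):

#include <stdio.h>

int main(void)
{
    int i_value = 16777217;
    float f_value = 16777217.0f;  /* actually stores 16777216.0f */

    /* Widening both operands to double preserves the integer exactly, but it
       also preserves the rounding error already baked into the float, so the
       two doubles differ. */
    double di = i_value;  /* 16777217.0 */
    double df = f_value;  /* 16777216.0 */

    printf("%.1f vs %.1f -> equal: %d\n", di, df, di == df);  /* equal: 0 */
    return 0;
}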