I have come across a bug that seems to be platform dependent. I am getting different results from clang++ and g++, but only on my 32-bit Debian machine. I was always under the impression that IEEE 754 was standardized and that all compilers abiding by the standard would produce the same results. Please let me know if I am wrong; I am just very confused about this. Also, I realize that depending on floating-point comparison is generally not a good idea.
#include <iostream>

#define DEBUG(line) std::cout << "\t\t" << #line << " => " << (line) << "\n";

int main() {
    double x = 128.0, y = 255.0;
    std::cout << "\n";
    DEBUG( x/y )
    DEBUG( ((x/y) == 128.0/255.0) )
    DEBUG( (128.0/255.0) )
    DEBUG( ((x/y)-(x/y)) )
    DEBUG( ((x/y)-(128.0/255.0)) )
    DEBUG( ((128.0/255.0)-0.501961) )
    std::cout << "\n";
    return 0;
}
And here is my output:
[~/Desktop/tests]$ g++ float_compare.cc -o fc
[~/Desktop/tests]$./fc
x/y => 0.501961
((x/y) == 128.0/255.0) => 0
(128.0/255.0) => 0.501961
((x/y)-(x/y)) => 0
((x/y)-(128.0/255.0)) => 6.9931e-18
((128.0/255.0)-0.501961) => -2.15686e-07
[~/Desktop/tests]$clang++ float_compare.cc -o fc
[~/Desktop/tests]$./fc
x/y => 0.501961
((x/y) == 128.0/255.0) => 1
(128.0/255.0) => 0.501961
((x/y)-(x/y)) => 0
((x/y)-(128.0/255.0)) => 0
((128.0/255.0)-0.501961) => -2.15686e-07
The Standard allows intermediate results to use extended precision, even in full compliance mode (which many compilers aren't in by default). It says in [expr]:

    The values of the floating operands and the results of floating expressions may be represented in greater precision and range than that required by the type; the types are not changed thereby.

For your particular situation comparing g++ and clang, see https://gcc.gnu.org/wiki/FloatingPointMath
Also, since extended precision differs between SSE (64-bit) and x87 (80-bit), the results of compile-time computations may depend not only on the compiler and its version, but also on which flags the compiler itself was built with.
The way to know whether IEEE 754 rules are in effect is to check std::numeric_limits<T>::is_iec559.