I have the following bit of code; however, when compiling it with GCC 4.4 with various optimization flags, I get some unexpected results when it's run.
#include <iostream>

int main()
{
    const unsigned int cnt = 10;
    double lst[cnt] = { 0.0 };
    const double v[4] = { 131.313, 737.373, 979.797, 731.137 };

    for(unsigned int i = 0; i < cnt; ++i) {
        lst[i] = v[i % 4] * i;
    }

    for(unsigned int i = 0; i < cnt; ++i) {
        double d = v[i % 4] * i;
        if(lst[i] != d) {
            std::cout << "error @ : " << i << std::endl;
            return 1;
        }
    }

    return 0;
}
When compiled with "g++ -pedantic -Wall -Werror -O1 -o test test.cpp" I get the following output: "error @ : 3".
When compiled with "g++ -pedantic -Wall -Werror -O2 -o test test.cpp" I get the following output: "error @ : 3".
When compiled with "g++ -pedantic -Wall -Werror -O3 -o test test.cpp" I get no errors.
When compiled with "g++ -pedantic -Wall -Werror -o test test.cpp" (no optimization) I get no errors.
I do not believe this is an issue related to rounding or to an epsilon difference in the comparison. I've tried this with Intel v10 and MSVC 9.0 and both work as expected. I believe the comparison should be nothing more than a bitwise compare.
If I replace the if-statement with if (static_cast<long long int>(lst[i]) != static_cast<long long int>(d)) and add "-Wno-long-long", I get no errors in any of the optimization modes when run.
If I add std::cout << d << std::endl; before the "return 1", I get no errors in any of the optimization modes when run.
Is this a bug in my code, or is there something wrong with GCC and the way it handles the double type?
Note: I've just tried this with GCC versions 4.3 and 3.3, and the error is not exhibited.
Resolution: Mike Dinsdale noted the following bug report: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323. It seems the GCC team is not completely sure about the nature of the problem.
As suggested in the bug report, a possible resolution is to use the -ffloat-store option. I've tried this and it works; however, the results from a performance point of view are not that great, though YMMV.
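For example, adding the flag to one of the failing configurations, e.g. "g++ -pedantic -Wall -Werror -O2 -ffloat-store -o test test.cpp", makes the program run without reporting an error, at the cost of forcing every intermediate result out to memory.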
The problem is likely the result of losing some precision when the result of an expression is stored back into a double, versus the compiler keeping the value in a register at higher precision in a local as an optimization. The C99 standard says in 6.3.1.8/2 "Usual arithmetic conversions": "The values of floating operands and of the results of floating expressions may be represented in greater precision and range than that required by the type; the types are not changed thereby."
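As a rough illustration of that store-versus-register effect (a hedged sketch, not code from the question; whether "not equal" actually prints depends on the compiler generating x87 code, e.g. a 32-bit x86 build at -O1 or -O2):

#include <iostream>

int main(int argc, char**)
{
    const double v = 979.797;
    double n = argc + 2;             // runtime value, so the products are not constant-folded

    volatile double stored = v * n;  // forced through a 64-bit double in memory
    double local = v * n;            // may be kept in an 80-bit x87 register

    // With x87 code generation the extra register precision can make these compare
    // unequal, even though both values come from the same expression.
    std::cout << (stored == local ? "equal" : "not equal") << std::endl;
    return 0;
}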
The fact that the result depends on the optimization settings suggests it might be the x87 extended precision messing with things (as Michael Burr says).
Here's some code I use (with gcc on x86 processors) to switch the extended precision off:
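A minimal sketch of that kind of code, assuming glibc's <fpu_control.h> on x86 (so not necessarily the exact snippet referred to above), clears the x87 precision-control bits and selects 53-bit double precision:

#include <fpu_control.h>

// Put the x87 FPU into 53-bit (double) precision so in-register intermediates
// are rounded the same way as doubles stored to memory.
static void set_x87_double_precision()
{
    fpu_control_t cw;
    _FPU_GETCW(cw);                                // read the current control word
    cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;      // clear both precision bits, select double
    _FPU_SETCW(cw);                                // write the modified control word back
}

Calling this at the start of main() affects subsequent x87 arithmetic only; it does nothing for SSE math, and it is not needed on x86-64 builds, where doubles go through SSE by default.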
Or you can just run your code with valgrind, which doesn't simulate the 80-bit registers, and is probably easier for a short program like this!
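For example, "valgrind ./test" (using the test binary built with one of the failing flag sets) runs the program on Valgrind's synthetic CPU, which, as noted, doesn't use the 80-bit registers, so the mismatch should not appear there.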
The width of the floating point registers on x86 is different from the width of a double in RAM. Therefore comparisons may succeed or fail depending entirely on how the compiler decides to optimize the loads of floating point values.
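A further option to see the width point in practice (assuming SSE2-capable hardware) is to have GCC do double arithmetic in SSE registers, which hold doubles at exactly 64 bits, instead of on the x87 stack, e.g. "g++ -pedantic -Wall -Werror -O2 -msse2 -mfpmath=sse -o test test.cpp". With matching register and memory widths, the comparison can no longer be upset by excess precision.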