My code is below:
#include <stdio.h>

int main(int argc, char *argv[])
{
    double f = 18.40;
    printf("%d\n", (int)(10 * f));
    return 0;
}
The result is 184 in VC6.0, while the result in Code::Blocks is 183. Why?
The reason for this is that GCC tries to keep the generated code compatible with older CPU architectures as much as possible, while MSVC tries to take advantage of newer features of the architecture.
The code generated by MSVC multiplies the two numbers, 10.0 × 18.40:
.text:00401006 fld ds:dbl_40D168
.text:0040100C fstp [ebp+var_8]
.text:0040100F fld ds:dbl_40D160
.text:00401015 fmul [ebp+var_8]
.text:00401018 call __ftol2_sse
and then calls a function named __ftol2_sse; inside this function the result is converted to an integer using an instruction named cvttsd2si:
.text:00401189 push ebp
.text:0040118A mov ebp, esp
.text:0040118C sub esp, 8
.text:0040118F and esp, 0FFFFFFF8h
.text:00401192 fstp [esp+0Ch+var_C]
.text:00401195 cvttsd2si eax, [esp+0Ch+var_C]
.text:0040119A leave
.text:0040119B retn
This instruction, cvttsd2si, is described on this page as:
Convert scalar double-precision floating-point value (with truncation) to signed doubleword or quadword integer (SSE2)
It basically converts the double to an integer, truncating toward zero. This instruction is part of the SSE2 instruction set, which was introduced with the Intel Pentium 4.
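If you want to see cvttsd2si from C, compilers that support SSE2 expose it through the _mm_cvttsd_si32 intrinsic in <emmintrin.h>. The sketch below is only an illustration of the instruction, not what either compiler emits for your cast; build it with -msse2 (or on x86-64, where SSE2 is always available):
#include <stdio.h>
#include <emmintrin.h>  /* SSE2 intrinsics */

int main(void)
{
    double f = 18.40;
    /* _mm_cvttsd_si32 maps to cvttsd2si: convert the low double of an XMM
       register to a 32-bit integer, truncating toward zero. The product
       10 * f, once rounded to double, is exactly 184.0, so this prints 184. */
    printf("%d\n", _mm_cvttsd_si32(_mm_set_sd(10 * f)));
    return 0;
}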
GCC doesn't use this instruction set by default and instead performs the conversion with instructions available on the original i386:
fldl   0x28(%esp)        # load f from the stack
fldl   0x403070          # load the constant 10.0
fmulp  %st,%st(1)        # multiply in the x87 registers (80-bit extended precision)
fnstcw 0x1e(%esp)        # save the current FPU control word
mov    0x1e(%esp),%ax
mov    $0xc,%ah          # set the rounding-control bits to "truncate toward zero"
mov    %ax,0x1c(%esp)
fldcw  0x1c(%esp)        # load the modified control word
fistpl 0x18(%esp)        # convert to a 32-bit integer and store it
fldcw  0x1e(%esp)        # restore the original control word
mov    0x18(%esp),%eax
mov    %eax,0x4(%esp)    # second argument to printf: the converted integer
movl   $0x403068,(%esp)  # first argument to printf: the format string
call   0x401b44 <printf>
mov    $0x0,%eax
If you want GCC to use cvttsd2si, you need to tell it to use the features available in SSE2 by compiling with the flag -msse2. This also means that people who are still using older computers won't be able to run the program. See Intel 386 and AMD x86-64 Options for more options.
So after compiling with -msse2, it will use cvttsd2si to convert the result to a 32-bit integer:
0x004013ac <+32>: movsd 0x18(%esp),%xmm1
0x004013b2 <+38>: movsd 0x403070,%xmm0
0x004013ba <+46>: mulsd %xmm1,%xmm0
0x004013be <+50>: cvttsd2si %xmm0,%eax
0x004013c2 <+54>: mov %eax,0x4(%esp)
0x004013c6 <+58>: movl $0x403068,(%esp)
0x004013cd <+65>: call 0x401b30 <printf>
0x004013d2 <+70>: mov $0x0,%eax
Now both MSVC and GCC should give the same number:
> type test.c
#include <stdio.h>
int main(int argc, char *argv[])
{
double f = 18.40;
printf("%d\n", (int) (10.0 * f));
return 0;
}
> gcc -Wall test.c -o gcctest.exe -msse2
> cl test.c /W3 /link /out:msvctest.exe
> gcctest.exe
184
> msvctest.exe
184
>
The Code::Blocks compiler probably has something like 18.39999999999 as the floating-point value. I think you should round if you want a consistent result.
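A minimal sketch of that suggestion, using round() from <math.h> (link with -lm on some systems); rounding to the nearest integer first gives 184 whether the product comes out as exactly 184.0 or as 183.999...:
#include <stdio.h>
#include <math.h>

int main(void)
{
    double f = 18.40;
    /* round to the nearest integer before converting, instead of relying on
       truncation of a product that may be just below 184 */
    printf("%d\n", (int)round(10 * f));  /* prints 184 */
    return 0;
}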
The point is that 0.4 is 2/5. Fractions with anything but a power of two in the denominator are not exactly representable in floating-point numbers, much like 1/3 is not exactly representable as a decimal number. Thus, your compiler has to choose a nearby representable number, with the result that 10*18.4 is not precisely 184, but 183.999...
Now, everything depends on the rounding mode employed when your float is converted to an integer. With round to nearest or round toward plus infinity you get 184; with round toward zero or round toward minus infinity you get 183.
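You can watch the rounding mode at work from C. A plain (int) cast always truncates, but lrint() from <math.h> honours the current rounding direction, so the sketch below uses fesetround() on a stand-in value, the largest double below 184 (build with something like gcc test.c -lm; strictly, ISO C also wants #pragma STDC FENV_ACCESS ON when changing the rounding mode):
#include <stdio.h>
#include <math.h>
#include <fenv.h>

int main(void)
{
    double nearly = nextafter(184.0, 0.0);  /* largest double below 184.0 */

    fesetround(FE_TONEAREST);   /* round to nearest */
    printf("%ld\n", lrint(nearly));  /* 184 */
    fesetround(FE_UPWARD);      /* round toward plus infinity */
    printf("%ld\n", lrint(nearly));  /* 184 */
    fesetround(FE_TOWARDZERO);  /* round toward zero, i.e. truncate */
    printf("%ld\n", lrint(nearly));  /* 183 */
    fesetround(FE_DOWNWARD);    /* round toward minus infinity */
    printf("%ld\n", lrint(nearly));  /* 183 */
    return 0;
}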
Floating point calculations are implemented differently by different compilers and different architectures. Even the same compiler can have different modes of operation that will yield different results.
For example, if I take your program and my installation of gcc (MinGW, 4.6.2) and compile like this:
gcc main.c
then the output is, as you report, 183.
However, if I compile like this:
gcc main.c -ffloat-store
then the output is 184 (-ffloat-store prevents the excess precision that comes from keeping intermediate values in the x87 floating-point registers).
If you really want to understand the differences you need to specify precise compiler versions, and specify which options you are passing to the compiler.
More fundamentally, you should be aware that the value 18.4 cannot be represented exactly as a binary floating-point value. The closest representable double-precision value to 18.4 is:
18.39999 99999 99998 57891 45284 79799 62825 77514 64843 75
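You can check this yourself; 17 significant digits are enough to distinguish any two doubles (a quick sketch):
#include <stdio.h>

int main(void)
{
    /* the double nearest to 18.4, printed to 17 significant digits */
    printf("%.17g\n", 18.4);  /* prints 18.399999999999999 */
    return 0;
}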
So I suspect that you are reasoning that the correct output from your program is 184. But that reasoning is flawed: it fails to account for representability and rounding issues.