Printing after typecasting with %d or %i gives unexpected output

Posted 2019-09-26 04:40

Question:

I am rounding off some values and then printing them. With the %f format specifier they print correctly, but with %d or %i (even after casting the rounded values to int) I get strange output, and I cannot figure out why.

Any help is much appreciated!

When I use %f:

/* ll_x, ll_y, ur_x, ur_y are arrays of double (inferred from the %f output below) */
i = 0;

while(i < n_shapes)
{
    ll_x[i] = (int)round((ll_x[i] - min_x)/pitch_x);
    ll_y[i] = (int)round((ll_y[i] - min_y)/pitch_y);
    ur_x[i] = (int)round((ur_x[i] - min_x)/pitch_x);
    ur_y[i] = (int)round((ur_y[i] - min_y)/pitch_y);
    printf("%f,%f,%f,%f\n", ll_x[i], ll_y[i], ur_x[i], ur_y[i]);
    i++;
}

Output:

115.000000,94.000000,115.000000,101.000000
116.000000,51.000000,117.000000,58.000000
116.000000,60.000000,117.000000,67.000000
116.000000,69.000000,117.000000,75.000000
116.000000,77.000000,117.000000,84.000000
116.000000,86.000000,117.000000,93.000000
116.000000,94.000000,117.000000,101.000000

Now, with %d (or %i):

i = 0;

while(i < n_shapes)
{
    ll_x[i] = (int)round((ll_x[i] - min_x)/pitch_x);
    ll_y[i] = (int)round((ll_y[i] - min_y)/pitch_y);
    ur_x[i] = (int)round((ur_x[i] - min_x)/pitch_x);
    ur_y[i] = (int)round((ur_y[i] - min_y)/pitch_y);
    printf("%d,%d,%d,%d\n", ll_x[i], ll_y[i], ur_x[i], ur_y[i]);
    i++;
}

Output:

1079590912,0,6,-1
1078788096,0,5,-1
1079033856,0,6,-1
1079164928,0,6,-1
1079312384,0,6,-1
1079459840,0,6,-1
1079590912,0,6,-1

Thank you!

Edit: Yes, I realize that casting to (int) inside the printf gives me the right output. I was curious about the values I got when I didn't do so. What do the values printed by %d without a cast inside the printf actually mean?

Answer 1:

This is undefined behavior. You need to use a format specifier that matches the type of the argument.

printf cannot verify that the types of the arguments you pass to it match their corresponding format specifiers. The default argument promotions are applied to the variadic arguments before the call, so printf expects to find a double for each %f (a float argument is promoted to double as well) and an int for each %d. Your code stores the cast result back into a double array element, so a double is passed where %d expects an int, which causes undefined behavior.
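
A minimal sketch of a fix, assuming ll_x, ll_y, ur_x, ur_y are arrays of double (which the working %f output suggests): cast each value to int at the call site, so the argument actually matches %d.

printf("%d,%d,%d,%d\n",
       (int)ll_x[i], (int)ll_y[i], (int)ur_x[i], (int)ur_y[i]);

Alternatively, store the rounded results in separate int variables, so the arguments already have the type the format string expects.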

Note that casting a float or double expression to int before assigning it to a float or double variable does not change how the number is stored. All the cast does is truncate the fractional part; the result is then converted back to the variable's floating-point type, so the stored representation is still floating point. In other words, if you do

double x = 12.345;
double y = (int)x;

it is the same as

double x = 12.345;
double y = (double)((int)x);

because in this case the compiler knows the type of the variable y and inserts the missing conversion for you.
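
To make this concrete, here is a small self-contained sketch: y still holds a double (with the value 12.0), so %f is the matching specifier for it.

#include <stdio.h>

int main(void)
{
    double x = 12.345;
    double y = (int)x;   /* truncates to 12, then converts back to double */

    printf("%f\n", y);   /* prints 12.000000; y is still a double */
    return 0;
}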



Answer 2:

The first thing to learn about gcc is that warnings are not all on by default: you need to enable them explicitly, and turn them into errors, with -Wall -Wextra -Werror. These flags will warn you about many things you did not get exactly right.

That includes mismatches between the format string and the argument types, as in your code.

I guess these warning options are not enabled by default because good-old-perfectly-working-K&R-code would suddenly produce warnings and upset some venerable hackers.
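
As an illustration (a sketch; the exact diagnostic text varies across gcc versions), -Wall enables -Wformat, which catches exactly the mismatch from the question at compile time:

/* format_demo.c (hypothetical file name)
 * Compile with:  gcc -Wall -Wextra -Werror format_demo.c
 * -Wformat (part of -Wall) flags the mismatched call below, and
 * -Werror turns that warning into a compile error. */
#include <stdio.h>

int main(void)
{
    double d = 115.0;
    printf("%d\n", d);   /* double argument where %d expects int */
    return 0;
}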