Understanding implicit conversions for printf

Published 2019-03-01 05:30

The C99 standard differentiates between implicit and explicit type conversions (6.3 Conversions). I guess, but could not find it stated, that implicit conversions are performed when the target type has greater precision than the source and can represent its value. [That is what I consider to happen when converting from int to double.] Given that, I looked at the following example:

#include <stdio.h>  // printf
#include <limits.h> // for INT_MIN
#include <stdint.h> // for endianess
#define IS_BIG_ENDIAN (*(uint16_t *)"\0\xff" < 0x100)

int main()
{
  printf("sizeof(int): %lu\n", sizeof(int));
  printf("sizeof(float): %lu\n", sizeof(float));
  printf("sizeof(double): %lu\n", sizeof(double));
  printf( IS_BIG_ENDIAN == 1 ? "Big" : "Little" ); printf( " Endian\n" );

  int a = INT_MIN;
  printf("INT_MIN: %i\n", a);
  printf("INT_MIN as double (or float?): %e\n", a);
}

I was very surprised to see this output:

sizeof(int): 4
sizeof(float): 4
sizeof(double): 8
Little Endian
INT_MIN: -2147483648
INT_MIN as double (or float?): 6.916919e-323

So the value printed is a subnormal floating-point number near the minimal positive subnormal double, 4.9406564584124654 × 10^−324. Strange things happen when I comment out the two printf calls for the endianness check: I get a different value:

#include <stdio.h>  // printf
#include <limits.h> // for INT_MIN
#include <stdint.h> // for endianess
#define IS_BIG_ENDIAN (*(uint16_t *)"\0\xff" < 0x100)

int main()
{
  printf("sizeof(int): %lu\n", sizeof(int));
  printf("sizeof(float): %lu\n", sizeof(float));
  printf("sizeof(double): %lu\n", sizeof(double));
  // printf( IS_BIG_ENDIAN == 1 ? "Big" : "Little" ); printf( " Endian\n" );

  int a = INT_MIN;
  printf("INT_MIN: %i\n", a);
  printf("INT_MIN as double (or float?): %e\n", a);
}

output:

sizeof(int): 4
sizeof(float): 4
sizeof(double): 8
INT_MIN: -2147483648
INT_MIN as double (or float?): 4.940656e-324
  • gcc --version: (Ubuntu 4.8.2-19ubuntu1) 4.8.2
  • uname: x86_64 GNU/Linux
  • compiler options were: gcc -o x x.c -Wall -Wextra -std=c99 --pedantic
  • And yes, there was one warning:
x.c: In function ‘main’:
x.c:15:3: warning: format ‘%e’ expects argument of type ‘double’, but argument 2
          has type ‘int’ [-Wformat=]

   printf("INT_MIN as double (or float?): %e\n", a);
   ^

But I still cannot understand what exactly is happening.

  • In little-endian byte order I picture INT_MIN as 00...0001 and the minimal subnormal double as 100..00#, starting with the mantissa, followed by the exponent, and ending with # as the sign bit (see the sketch after this list).
  • Is applying the "%e" format specifier to an int an implicit conversion? A reinterpret cast?
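
To check that mental model, here is a minimal sketch that dumps the raw bytes of both objects (dump_bytes is just a helper I wrote for this question; the byte values in the comments assume a little-endian machine):

#include <stdio.h>  // printf, size_t
#include <limits.h> // INT_MIN

/* print the raw bytes of an object, lowest address first */
static void dump_bytes(const void *p, size_t n)
{
  const unsigned char *b = p;
  for (size_t i = 0; i < n; ++i)
    printf("%02x ", b[i]);
  printf("\n");
}

int main(void)
{
  int i = INT_MIN;
  double d = 0x1p-1074;        /* smallest positive subnormal double, 2^-1074 */
  dump_bytes(&i, sizeof i);    /* little endian: 00 00 00 80 */
  dump_bytes(&d, sizeof d);    /* little endian: 01 00 00 00 00 00 00 00 */
  return 0;
}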

I am lost, please enlighten me.

2 Answers
forever°为你锁心
Answered 2019-03-01 05:44

Arguments in the variadic part of a call are not converted to any particular target type; the compiler knows nothing about what types such a function expects for those arguments, so it cannot do that. Modern compilers do know how to interpret the format string, though, and so they are able to warn you when something fishy is going on. That is what is happening when you see the warning from gcc.

To be more precise, the default argument promotions are applied: narrow integer types are promoted to int, and float is promoted to double. But that is all the magic that can happen here.
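
A minimal sketch of why that matters (print_doubles is just an illustrative helper, not something from the question):

#include <stdarg.h>
#include <stdio.h>

/* reads 'count' doubles from the variadic argument list */
static void print_doubles(int count, ...)
{
  va_list ap;
  va_start(ap, count);
  for (int i = 0; i < count; ++i)
    printf("%e\n", va_arg(ap, double)); /* a double is consumed each time */
  va_end(ap);
}

int main(void)
{
  float f = 1.5f;
  print_doubles(1, f);        /* fine: the float is promoted to double */
  /* print_doubles(1, 42); */ /* undefined: an int is passed, a double is read */
  return 0;
}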

In summary, always use the correct format specifier.

BTW, for the size_t values produced by your sizeof expressions, the correct specifier is %zu.
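
For example, the prints from the question could look like this (just a sketch, assuming a C99 library that understands %zu):

#include <stdio.h>
#include <limits.h>

int main(void)
{
  printf("sizeof(int): %zu\n", sizeof(int));       /* %zu matches size_t */
  printf("sizeof(double): %zu\n", sizeof(double));
  printf("INT_MIN as double: %e\n", (double)INT_MIN); /* explicit conversion */
  return 0;
}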

何必那么认真
Answered 2019-03-01 06:08
printf("INT_MIN as double (or float?): %e\n", a);

The line above has a problem: you cannot use %e to print an int. The behavior is undefined.

You should use

printf("INT_MIN as double (or float?): %e\n", (double)a);

or

double t = a;
printf("INT_MIN as double (or float?): %e\n", t);

Related post: this post explains how using incorrect format specifiers in printf can lead to undefined behavior.
