How does a C compiler interpret the "L" suffix, which denotes a long integer literal, in light of implicit conversion? The following code, when run on a 32-bit platform (32-bit long, 64-bit long long), seems to convert the expression (0xffffffffL) to the 64-bit integer 4294967295 rather than to the 32-bit value -1.
Sample code:
#include <stdio.h>

int main(void)
{
    long long x = 10;
    long long y = (0xffffffffL);
    long long z = (long)(0xffffffffL);

    printf("long long x == %lld\n", x);
    printf("long long y == %lld\n", y);
    printf("long long z == %lld\n", z);
    printf("0xffffffffL == %ld\n", 0xffffffffL);

    if (x > (long)(0xffffffffL))
        printf("x > (long)(0xffffffffL)\n");
    else
        printf("x <= (long)(0xffffffffL)\n");

    if (x > (0xffffffffL))
        printf("x > (0xffffffffL)\n");
    else
        printf("x <= (0xffffffffL)\n");

    return 0;
}
Output (compiled with GCC 4.5.3 on 32-bit Debian):
long long x == 10
long long y == 4294967295
long long z == -1
0xffffffffL == -1
x > (long)(0xffffffffL)
x <= (0xffffffffL)
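
To pin down the literal's actual type, here is a minimal C11 probe using _Generic (this needs a newer compiler than the GCC 4.5.3 above, which predates C11); the candidate types in the selection list are just the ones I would expect to be relevant on this platform:

#include <stdio.h>

int main(void)
{
    /* _Generic selects the string matching the literal's type.
       On a platform with a 32-bit long, I would expect
       "unsigned long", since 0xffffffff exceeds LONG_MAX
       (0x7fffffff) and a hexadecimal literal with an L suffix
       can take the type unsigned long. */
    printf("type of 0xffffffffL: %s\n",
           _Generic(0xffffffffL,
                    long:               "long",
                    unsigned long:      "unsigned long",
                    long long:          "long long",
                    unsigned long long: "unsigned long long",
                    default:            "something else"));
    return 0;
}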