#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int x, *ptr_x;
    float f, *ptr_f;

    ptr_f = &f;
    ptr_x = &x;
    *ptr_x = 5;
    *ptr_f = 1.5;   //printf("%d %f\n", f, x);

    printf("\n\nxd = %d \t xf = %f \n ff = %f \t fd = %d", x, x, f, f);
    return 0;
}
The output for ff = %f is not what I expected:
xd = 5 xf = 0.000000
ff = 0.000000 fd = 1073217536
The point of this code is to show what happens if a float value is printed with %d and an int value is printed with %f.
Why is the float value not printed properly even when I use %f?
printf() is not type-safe. The arguments that you pass to printf() are treated according to what you promise in the format string. Also, floats are promoted to doubles when passed through variadic arguments. So when you promise %f the first time (for xf), printf gobbles up an entire double (usually 8 bytes) from the arguments, swallowing your float in the process. Then the second %f cuts right into the zero mantissa of the second double.
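As a side note, here is a minimal sketch of that promotion rule (show_promoted is an invented helper, not part of any library): a variadic function has to fetch a float argument as a double, because the caller has already promoted it before the call.

#include <stdarg.h>
#include <stdio.h>

/* show_promoted is a hypothetical helper: it reads its single variadic
 * argument with va_arg(ap, double), because a float passed through "..."
 * has already been promoted to double by the caller. */
static void show_promoted(const char *label, ...)
{
    va_list ap;
    va_start(ap, label);
    double d = va_arg(ap, double);  /* va_arg(ap, float) would be undefined */
    va_end(ap);
    printf("%s = %f (arrived as a double, usually 8 bytes)\n", label, d);
}

int main(void)
{
    float f = 1.5f;
    show_promoted("f", f);  /* prints f = 1.500000 ... */
    return 0;
}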
Here's a picture of your arguments:
+-0-1-2-3-+-0-1-2-3-+-0-1-2-3-4-5-6-7-+-0-1-2-3-4-5-6-7-+
|    x    |    x    |        f        |        f        |
+---------+---------+-----------------+-----------------+
%d--------|%f----------------|%f---------------|%d------|
But f looks like this (having been promoted to double):
f = 3FF8000000000000
Let's draw it again with values, speculating about your machine's endianness:
| 05000000 | 05000000 | 00000000 0000F83F | 00000000 0000F83F |
%d, OK-----|%f, denormal...-----|%f, denormal...----|%d, OK---|
Note that 1073217536 is 0x3FF80000.
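If you want to verify those numbers, here is a small standalone sketch (independent of your program) that inspects the representation of 1.5 via memcpy; it assumes double and uint64_t are both 8 bytes, as on the asker's machine, and prints 0x3FF8000000000000, whose high 32 bits are exactly 1073217536.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double d = 1.5;                  /* what the float f becomes after promotion */
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);  /* well-defined way to view the bit pattern */

    printf("1.5 as double : 0x%016" PRIX64 "\n", bits);              /* 0x3FF8000000000000 */
    printf("high 32 bits  : %" PRIu32 "\n", (uint32_t)(bits >> 32)); /* 1073217536 */
    return 0;
}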
Once you pass at least one invalid format specifier to printf (like attempting to print a float value with %d, or an int value with %f), your entire program gets screwed up beyond repair. The consequences of that can show up anywhere in the program. In your case, an attempt to print something with an invalid format specifier meant that even the valid format specifiers stopped working.
Speaking formally, you wrote a program that exhibits undefined behavior. It can act absolutely unpredictably. You said it yourself:
The point of this code is to show what would happen if a float value is printed with %d and if an int value is printed with %f.
The broken behavior that you observe demonstrates exactly that! A bizarrely and unpredictably acting program is exactly what happens when you attempt to do something like that.
Try this:
printf("size of int = %d, size of float = %d, size of double = %d\n",
sizeof(int), sizeof(float), sizeof(double));
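For completeness, here is one way the asker's printf could be rewritten so that every specifier matches its argument; the explicit casts are there only to show the "other" representation deliberately, instead of lying to printf:

#include <stdio.h>

int main(void)
{
    int   x = 5;
    float f = 1.5f;

    /* %d gets an int, %f gets a double (the float is promoted automatically) */
    printf("xd = %d \t xf = %f\n", x, (double)x);  /* convert x explicitly */
    printf("ff = %f \t fd = %d\n", f, (int)f);     /* convert f explicitly */
    return 0;
}

This prints xd = 5, xf = 5.000000, ff = 1.500000 and fd = 1, with no undefined behavior.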
When you call printf(), the system pushes the arguments onto the stack. So the stack looks something like this:
pointer to format string [probably 4 bytes]
x [probably 4 bytes]
x [probably 4 bytes]
f [probably 6 or 8 bytes]
f [probably 6 or 8 bytes]
Then printf() pops bytes off the stack as it parses the format string. When it sees %d it pops enough bytes for an int, and when it sees %f it pops enough bytes for a float. (Actually, floats are promoted to doubles when they're passed as function arguments, but the important idea is that they require more bytes than ints.) So if you "lie" about the arguments, it will pop the wrong number of bytes and blindly convert them according to your instructions.
So it will first pop the correct number of bytes for xd, because you've correctly told it that x is an int. But then it will pop enough bytes for a float, which will consume the second x and part of the first f from the stack, and interpret them as a float for xf. Then it will pop off enough bytes for another float, which will consume the remainder of the first f and part of the second f, and interpret them as a float for ff. Finally, it will pop off enough bytes for an int, which will consume the remainder of the second f, and interpret them as an int for fd.
Hope that helps.