Why is 1.0f in C code represented as 1065353216 in the compiled assembly?

Question:

In C I have this code block:

if(x==1){
    a[j][i]=1;
}
else{
    a[j][i]=0;
}

a is a matrix of float values. If I look at the compiled assembly of this code in NASM syntax,

the assignment a[j][i]=0; was compiled as

mov dword [rsi+rdi], 0

but the assignment a[j][i]=1; was compiled as

mov dword [rsi+rdi], 1065353216

How can 1065353216 represent 1.0f?

Answer 1:

Because 1065353216 is the unsigned 32-bit integer representation of the 32-bit floating point value 1.0.

More specifically, 1.0 as a 32-bit float becomes:

0....... ........ ........ ........ sign bit (zero is positive)
.0111111 1....... ........ ........ exponent (127, which means zero)
........ .0000000 00000000 00000000 mantissa (zero, no correction needed)
___________________________________
00111111 10000000 00000000 00000000 result

So the end result is (1 + 0) × 2^0, which is 1 × 1, which is 1.

You can use a converter such as binaryconvert.com to see other values.

As to why 127 suddenly means zero in the exponent: it's actually a pretty clever trick called exponent bias that makes it easier to compare floating-point values. Try out the converter with wildly different values (10, 100, 1000...) and you'll see the exponent increases as well. Sorting is also the reason the sign bit is the first bit stored.
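If you want to inspect those three fields yourself, here is a minimal sketch (assuming the fixed-width types from <stdint.h>; the variable names are just for illustration) that extracts them from the raw bits with shifts and masks:

#include <stdio.h>
#include <inttypes.h>

int main(void) {
  uint32_t bits = 1065353216u;             /* the raw bits of 1.0f */

  uint32_t sign     = bits >> 31;          /* top bit */
  uint32_t exponent = (bits >> 23) & 0xFF; /* next 8 bits */
  uint32_t mantissa = bits & 0x7FFFFF;     /* low 23 bits */

  /* prints: sign = 0, exponent = 127, mantissa = 0 */
  printf("sign = %" PRIu32 ", exponent = %" PRIu32 ", mantissa = %" PRIu32 "\n",
         sign, exponent, mantissa);
  return 0;
}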



Answer 2:

The float is represented in binary32 format. The positive floats go from 0.0f (whose bits when interpreted as integer represent 0) to +inf (whose bits interpreted as integer represent approximately 2000000000).

The number 1.0f is almost exactly halfway between these two extremes. There are approximately as many positive float values below it (10^-1, 10^-2, …) as there are values above it (10^1, 10^2, …). For this reason the value of 1.0f, when its bits are interpreted as an integer, is near 1000000000.
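To see that ordering concretely, here is a small sketch (the bits_of helper is just for illustration; memcpy-based type punning is well defined in C) that prints the integer bit patterns of a few increasing positive floats:

#include <stdio.h>
#include <string.h>
#include <inttypes.h>

/* Reinterpret a float's bits as a 32-bit unsigned integer. */
static uint32_t bits_of(float f) {
  uint32_t u;
  memcpy(&u, &f, sizeof u);
  return u;
}

int main(void) {
  /* The bit patterns grow along with the float values. */
  printf("%" PRIu32 "\n", bits_of(0.5f)); /* 1056964608 */
  printf("%" PRIu32 "\n", bits_of(1.0f)); /* 1065353216 */
  printf("%" PRIu32 "\n", bits_of(2.0f)); /* 1073741824 */
  return 0;
}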



Answer 3:

You can see the binary representation of the floating-point number 1.0 with the following lines of code:

#include <stdio.h>

int main(void) {
  float a = 1.0;
  printf("in hex, this is %08x\n", *((int*)(&a)));
  printf("the int representation is %d\n", *((int*)(&a)));
  return 0;
}

This results in

in hex, this is 3f800000
the int representation is 1065353216

The format of a 32-bit floating-point number is given by

1 sign bit      (s)    = 0
8 exponent bits (e)    = 7F = 127
23 mantissa bits (m)   = 0

You add an (implied) 1 in front of the mantissa - in the above case the mantissa is all zeros, and the implied value is

1000 0000 0000 0000 0000 0000

This is 2^23 or 8388608. Now you multiply by (-1)^sign - which is 1 in this case.

Finally, you multiply by 2^(exponent-150). Really, you should express the mantissa as a fraction (1.0000000) and multiply by 2^(exponent-127), but that's the same thing. Either way, the result is 1.0
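To verify that arithmetic directly, here is a short sketch using ldexp from <math.h>, which scales a value by a power of two:

#include <stdio.h>
#include <math.h>

int main(void) {
  double mantissa = 8388608.0;               /* 2^23, mantissa with implied 1 */
  double value = ldexp(mantissa, 127 - 150); /* multiply by 2^(exponent-150) */
  printf("%f\n", value);                     /* prints 1.000000 */
  return 0;
}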

That should clear it up for you.

UPDATE: it was pointed out in the comments that my code example may invoke undefined behavior, although my gcc compiler generated no warnings or errors. The code below is a more correct way to prove that 1.0 is 1065353216 as an int (for 32-bit int and float...):

#include <stdio.h>

union {
  int i;
  float a;
} either;

int main(void) {
  either.a = 1.0;
  printf("in hex, this is %08x\n", either.i);
  printf("the int representation is %d\n", either.i);
  return 0;
}
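A fully defined alternative (not from the original answer, just a sketch) is to copy the float's bytes into an integer with memcpy, which avoids both the pointer cast and the union:

#include <stdio.h>
#include <string.h>
#include <inttypes.h>

int main(void) {
  float a = 1.0f;
  uint32_t i;
  memcpy(&i, &a, sizeof i);  /* copy the float's bytes into an integer */
  printf("in hex, this is %08" PRIx32 "\n", i);
  printf("the int representation is %" PRIu32 "\n", i);
  return 0;
}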