I want to convert a float to an unsigned long while keeping the binary representation of the float (so I do not want to cast 5.0 to 5!). This is easy to do in the following way:
float f = 2.0;
unsigned long x = *((unsigned long*)&f);
However, now I need to do the same thing in a #define
, because I want to use this later on in some array initialization (so an [inline] function is not an option).
This does not compile:
#define f2u(f) *((unsigned long*)&f)
If I call it like this:
unsigned long x[] = { f2u(1.0), f2u(2.0), f2u(3.0), ... }
The error I get is (logically):
lvalue required as unary ‘&’ operand
Note: One solution suggested below was to use a union type for my array. However, that's not an option. I'm actually doing the following:
#define Calc(x) (((x & 0x7F800000) >> 23) - 127)
unsigned long x[] = { Calc(f2u(1.0)), Calc(f2u(2.0)), Calc(f2u(3.0)), ... };
So the array really will/must be of type unsigned long[].
Following along with @caf's answer, you can use a union. This prints the expected values (under GCC 3.4.5; old, I know, but that's all I have where I am at the moment, using -O3), and the generated asm confirms the values are treated as unsigned longs.
s:Why not simply run a init function on the data yourself. You can update the unsigned long table with your calculations during runtime rather then compile time.