When testing my library, Construct, I found that tests fail when numbers are built and then parsed back to a float. Shouldn't floats be represented exactly, just like in-memory floats?
In [13]: import struct
In [14]: d = struct.Struct("<f")
In [15]: d.unpack(d.pack(1.23))
Out[15]: (1.2300000190734863,)
Floating point is inherently imprecise, but you are packing a double-precision float (binary64) into a single-precision (binary32) space there. See the Basic and interchange formats section of the Wikipedia article on IEEE 754 floating-point formats; the Python float type uses double precision (see the standard types docs: "Floating point numbers are usually implemented using double in C").
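You can see the binary32 rounding directly, and, if your format really only stores single precision, compare with a tolerance in tests instead of testing exact equality. A minimal sketch (the rel_tol value is my own choice, roughly matching binary32's ~7 significant decimal digits):

import math
import struct

# Packing a Python float (binary64) with '<f' rounds it to the nearest
# binary32 value; unpacking widens that value back to binary64, so the
# round-tripped number usually differs from the original.
original = 1.23
roundtripped, = struct.unpack("<f", struct.pack("<f", original))
print(roundtripped)                # 1.2300000190734863
print(roundtripped == original)    # False

# Compare with a tolerance that matches binary32's precision instead.
print(math.isclose(original, roundtripped, rel_tol=1e-6))  # True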
Use d for double precision:
>>> import struct
>>> d = struct.Struct("<d")
>>> d.unpack(d.pack(1.23))
(1.23,)
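If you want to convince yourself that the binary64 round trip is lossless in general, a quick property-style check helps (a sketch; the value range and iteration count are arbitrary choices):

import random
import struct

# '<d' packs to IEEE 754 binary64, which is exactly the format CPython uses
# for float, so pack/unpack returns the original value for any finite float.
d = struct.Struct("<d")
for _ in range(10_000):
    x = random.uniform(-1e9, 1e9)
    assert d.unpack(d.pack(x))[0] == x
print("binary64 round trip is exact")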
From the Format characters section:
Format   C Type   Python type   Standard size   Footnote
f        float    float         4               (5)
d        double   float         8               (5)
- For the 'f' and 'd' conversion codes, the packed representation uses the IEEE 754 binary32 (for 'f') or binary64 (for 'd') format, regardless of the floating-point format used by the platform.
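To make the size difference concrete, you can look at the packed bytes and at struct.calcsize (a small sketch; the hex output in the comments is what I get for 1.23 with little-endian byte order):

import struct

# 'f' always packs to 4 bytes of IEEE 754 binary32 and 'd' to 8 bytes of
# binary64, whatever the platform's native float representation is.
print(struct.pack("<f", 1.23).hex())   # a4709d3f          (4 bytes)
print(struct.pack("<d", 1.23).hex())   # ae47e17a14aef33f  (8 bytes)
print(struct.calcsize("<f"), struct.calcsize("<d"))  # 4 8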