Python has a built-in round() function, but I am programming with Cython and want to replace the Pythonic code with a NumPy function. However, I got the following results when experimenting in a terminal session:
>>> np.around(1.23456789)
1.0
>>> np.around(1.23456789, decimals=0)
1.0
>>> np.around(1.23456789, decimals=1)
1.2
>>> np.around(1.23456789, decimals=2)
1.23
>>> np.around(1.23456789, decimals=3)
1.2350000000000001
>>> np.around(1.23456789, decimals=4)
1.2345999999999999
This is kind of strange, and I still want the following "desired" result:
>>> round(1.23456789,3)
1.235
>>> round(1.23456789,4)
1.2346
The problem is that the binary representation of floating-point numbers can't exactly represent most decimal fractions. For example, the two closest double-precision values to 1.235 are:
- 1.2350000000000000976996261670137755572795867919921875
- 1.234999999999999875655021241982467472553253173828125
Since the first one is closer to the desired value, it's the one you get.
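You can verify this yourself: constructing a Decimal directly from a float converts the stored binary value exactly, digit for digit.

>>> from decimal import Decimal
>>> Decimal(1.235)
Decimal('1.2350000000000000976996261670137755572795867919921875')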
When you let the Python environment display a floating-point number, it uses the __repr__ conversion, which shows enough digits to identify the number unambiguously. If you use the __str__ conversion instead, it should round the number to a reasonable number of digits. At least that's what the built-in float type does; I assume NumPy works the same way. The print function calls __str__ by default, so try this:
print(np.around(1.23456789, decimals=3))
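Note that the exact text str produces can vary across Python and NumPy versions. If you only need a rounded display, explicit string formatting gives you full control over the digits shown; here is a small sketch:

import numpy as np

x = np.around(1.23456789, decimals=3)
# The format spec fixes the displayed digits regardless of what repr shows
print(f"{x:.3f}")  # 1.235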
For applications where you absolutely need decimal accuracy, there is the decimal module. It can do rounding as well.
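A minimal sketch of decimal-based rounding, assuming you can supply the value as a string (a string literal avoids inheriting the float's binary error):

from decimal import Decimal, ROUND_HALF_UP

# Constructing from a string keeps the value exactly 1.23456789
d = Decimal("1.23456789")
# quantize() rounds to a fixed number of decimal places
print(d.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP))   # 1.235
print(d.quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP))  # 1.2346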