Why does Python show `0.2 + 0.1` as `0.30000000000000004`?

Posted 2019-03-03 10:36

I have written the following code for generating a range with floats:

def drange(start, stop, step):
    result = []
    value = start
    while value <= stop:
        result.append(value)
        value += step
    return result

When calling this function with this statement:

print drange(0.1, 1.0, 0.1)

I expected to obtain this:

[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]

But I obtain the following, instead:

[0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6, 0.7, 0.7999999999999999, 0.8999999999999999, 0.9999999999999999]

Why is this, and how can I fix it?

Thanks!

1 Answer
狗以群分
Answered 2019-03-03 11:32

That's how floating-point numbers work. You can't represent infinitely many real numbers in a finite number of bits, so values get rounded to the nearest number the format can hold. You should take a look at What Every Programmer Should Know About Floating-Point Arithmetic:

Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?

Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.

When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
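You can see this rounding directly in the interpreter. Here is a quick illustration (it assumes a standard CPython build using IEEE 754 double precision, which is what virtually every platform provides):

# Print more digits than Python normally shows for a float.
print(format(0.1, ".20f"))   # 0.10000000000000000555
print(format(0.3, ".20f"))   # 0.29999999999999998890

# The tiny errors in 0.1 and 0.2 add up, so the sum is not the
# same float as the literal 0.3.
print(0.1 + 0.2)             # 0.30000000000000004
print(0.1 + 0.2 == 0.3)      # False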

Use round(number, k) to round a given floating-point value to k digits after the decimal (so in your case, use round(number, 1) for one digit).
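Applied to your drange, one way to do that is to round each value before storing it. This is just a sketch of the suggestion above; the extra digits parameter is my addition, not part of your original code:

def drange(start, stop, step, digits=1):
    # Same loop as in the question, but each value is rounded to
    # `digits` decimal places before being stored, so the accumulated
    # binary rounding error is no longer visible in the output.
    result = []
    value = start
    while value <= stop:
        result.append(round(value, digits))
        value += step
    return result

print(drange(0.1, 1.0, 0.1))
# [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]

Keep in mind that rounding only hides the representation error in the output. If you need exact decimal arithmetic, the decimal or fractions modules in the standard library avoid binary floats altogether, and computing each value as start + i * step instead of repeatedly adding step keeps the error from accumulating.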
