Let's say I am writing a unit test for a function that returns a floating point number. I can write the test at full precision, as computed on my machine:
>>> import unittest
>>> def div(x,y): return x/float(y)
...
>>>
>>> class Testdiv(unittest.TestCase):
...     def testdiv(self):
...         assert div(1,9) == 0.1111111111111111
...
>>> unittest.main()
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Will that full floating point precision be the same across OS/distro/machine?
I could round the result and write the unit test like this:
>>> class Testdiv(unittest.TestCase):
...     def testdiv(self):
...         assert round(div(1,9),4) == 0.1111
...
>>>
I could also assert on log(output), but to keep a fixed decimal precision I would still need to round or truncate.
What other, more Pythonic ways are there to unit test floating point output?
The unittest.TestCase class has specific methods for comparing floats: assertAlmostEqual and assertNotAlmostEqual. Per the documentation, assertAlmostEqual tests that two values are approximately equal by computing their difference, rounding it to a given number of decimal places (7 by default), and comparing it to zero. Thus, you could test the function like this:
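A minimal sketch of such a test, reusing the div function from the question (the class and test names here are only illustrative):

import unittest

def div(x, y):
    return x / float(y)

class TestDiv(unittest.TestCase):
    def testdiv(self):
        # Passes because round(div(1, 9) - 0.1111111, 7) == 0
        self.assertAlmostEqual(div(1, 9), 0.1111111)

if __name__ == '__main__':
    unittest.main()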
Using the TestCase.assert* methods is preferred over bare assert statements because the latter can be optimized out in some cases (for example when Python is run with -O). Also, the failure messages produced by these methods are generally much more informative.
The precision of float in Python depends on the underlying C representation. As the tutorial section Floating Point Arithmetic: Issues and Limitations (15.1) explains, almost all platforms map Python floats to IEEE 754 double precision, so the representation is generally the same across machines, but binary floats are only approximations of decimal values and exact equality comparisons are fragile. As for testing, a better idea is to use existing functionality, e.g. TestCase.assertAlmostEqual. Example:
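A sketch of what such a test could look like, again assuming the div function from the question (the delta variant is shown only as an alternative to the default places-based comparison):

import unittest

def div(x, y):
    return x / float(y)

class TestDiv(unittest.TestCase):
    def test_places(self):
        # The difference is rounded to 7 decimal places (the default) and compared to zero.
        self.assertAlmostEqual(div(1, 9), 0.1111111)

    def test_delta(self):
        # Alternatively, require the absolute difference to stay within an explicit tolerance.
        self.assertAlmostEqual(div(1, 9), 0.1111, delta=1e-4)

if __name__ == '__main__':
    unittest.main()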
If you prefer to stick to the assert statement, you could use math.isclose (Python 3.5+):
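A minimal sketch, assuming the div function from the question and a runner such as pytest that collects bare-assert test functions:

import math

def div(x, y):
    return x / float(y)

def test_div():
    # isclose uses a relative tolerance (1e-09 by default), so no manual
    # rounding or truncation of the result is needed.
    assert math.isclose(div(1, 9), 0.1111111111111111)
    # A looser, explicit relative tolerance can be passed as well.
    assert math.isclose(div(1, 9), 0.1111, rel_tol=1e-3)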
The default relative tolerance of math.isclose is 1e-09, "which assures that the two values are the same within about 9 decimal digits". For more information about math.isclose, see PEP 485.