I am seeking some more understanding about the "resolution" parameter of a numpy float (I guess any computer defined float for that matter).
Consider the following script:
import numpy as np
a = np.finfo(10.1)
print a
I get an output which among other things prints out:
precision=15 resolution= 1.0000000000000001e-15
max= 1.797(...)e+308
min= -max
The numpy documentation specifies: "resolution: (floating point number of the appropriate type) The approximate decimal resolution of this type, i.e., 10**-precision." (source)
resolution is derived from precision, but unfortunately this definition is somewhat circular: "precision (int): The approximate number of decimal digits to which this kind of float is precise." (source)
I understand that floating point numbers are merely finite representations of real numbers and therefore have error in their representation, and that precision is probably a measure of this deviation. But practically, does this mean that I should expect results to be erroneous if I perform operations using numbers less than the resolution? How can I quantify the error, for say addition, of two floating point numbers given their precision? If the resolution is as "large" as 1e-15, why would the smallest allowable number be on the order of 1e-308?
Thank you in advance!
The short answer is "dont' confuse
numpy.finfo
withnumpy.spacing
".finfo
operates on thedtype
, whilespacing
operates on the value.Background Information
First, though, some general explanation:
The key part to understand is that floating point numbers are similar to scientific notation. Just like you'd write 0.000001 as `1.0 x 10^-6`, floats are similar to `c x 2^q`. In other words, they have two separate parts: a coefficient (`c`, a.k.a. the "significand") and an exponent (`q`). These two values are stored as integers.

Therefore, how closely a value can be represented (let's think of this as the degree of discretization) is a function of both parts, and depends on the magnitude of the value.
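A quick way to see these two parts is `np.frexp`, shown here purely as an illustrative sketch (it reports the coefficient normalized to [0.5, 1) rather than the raw stored integers):

```python
import numpy as np

# frexp splits a float into coefficient and base-2 exponent: x == c * 2**q,
# with c normalized to lie in [0.5, 1) for positive x.
c, q = np.frexp(10.1)
print(c, q)   # 0.63125 4  (since 0.63125 * 2**4 == 10.1)
```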
However, the "precision" (as referred to by
np.finfo
) is essentially the number of significant digits if the number were written in base-10 scientific notation. The "resolution" is the resolution of the coefficient (part in front) if the value were written in the same base-10 scientific notation (i.e.10^-precision
). In other words, both are only a function of the coefficient.Numpy-specific
For `numpy.finfo`, "precision" and "resolution" are simply the inverse of each other. Neither one tells you how closely a particular number is being represented. They're purely a function of the dtype.

Instead, if you're worried about the absolute degree of discretization, use `numpy.spacing(your_float)`. This returns the difference between that value and the next largest representable value in that particular format (e.g. it's different for a `float32` than for a `float64`).

Examples
As an example:
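Here's a rough sketch for `float64` values (the printed digits are approximate and may vary slightly by platform or numpy version):

```python
import numpy as np

# np.spacing(x) is the gap between x and the next representable float
# of the same dtype -- it grows with the magnitude of x.
print(np.spacing(1e-10))   # ~1.3e-26
print(np.spacing(1.0))     # ~2.2e-16
print(np.spacing(1e10))    # ~1.9e-06
```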
But the precision and resolution don't change:
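For instance (a sketch; all three values below are plain `float64`):

```python
import numpy as np

# finfo looks only at the dtype, not the value, so it reports the same
# precision and resolution for all of these:
for x in [1e-10, 1.0, 1e10]:
    info = np.finfo(np.asarray(x).dtype)
    print(info.precision, info.resolution)   # 15 and 1e-15 every time
```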
Also note that all of these depend on the data type that you're using:
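For example, a sketch comparing `float32` and `float64` (exact printed values may differ slightly by numpy version):

```python
import numpy as np

# Spacing changes with the dtype...
print(np.spacing(np.float32(1.0)))   # ~1.2e-07
print(np.spacing(np.float64(1.0)))   # ~2.2e-16

# ...and so do finfo's precision and resolution.
print(np.finfo(np.float32).precision, np.finfo(np.float32).resolution)   # 6, 1e-06
print(np.finfo(np.float64).precision, np.finfo(np.float64).resolution)   # 15, 1e-15
```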
Specific Questions
Now on to your specific questions:
Does operating on numbers smaller than the resolution mean the results will be erroneous? No, because the precision/resolution (in `numpy.finfo` terms) is only a function of the coefficient, and doesn't take into account the exponent. Very small and very large numbers have the same "precision", but that's not an absolute "error".

As a rule of thumb, when using the "resolution" or "precision" terms from `finfo`, think of scientific notation. If we're operating on small numbers with similar magnitudes, we don't need to worry about much.

Let's take the decimal math case with 6 significant digits (somewhat similar to a `float32`).
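One way to sketch this is with Python's `decimal` module (the specific numbers are hypothetical, chosen only for illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 6   # emulate arithmetic with 6 significant digits

# Small numbers of similar magnitude: nothing meaningful is lost.
a = Decimal('1.20001e-6')
b = Decimal('2.30002e-6')
print(a + b)   # exactly 3.50003e-6 -- all 6 significant digits survive
```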
However, if we operate on numbers with wildly different magnitudes but limited precision (again, 6 significant digits), we'll start to see the effects quite clearly.
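Again sketched with the `decimal` module (same hypothetical 6-digit setup):

```python
from decimal import Decimal, getcontext

getcontext().prec = 6   # still 6 significant digits

# Wildly different magnitudes: the small addend vanishes entirely.
a = Decimal('1.00001e+10')
b = Decimal('2.30002e-6')
print(a + b)   # 1.00001E+10 -- the 2.30002e-6 contribution is completely lost
```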
To quantify the error of, say, adding two floating point numbers, use `np.spacing(result)`.
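As a sketch (assuming `float64`; the spacing printed depends on the magnitude of the result):

```python
import numpy as np

a = np.float64(1.00001e10)
b = np.float64(2.30002e-6)
result = a + b

# A single IEEE addition is correctly rounded, so its rounding error is
# at most half of the spacing (one ulp) at the result's magnitude.
print(result)
print(np.spacing(result))   # ~1.9e-06 for a result of this size
```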
.Again, the "resolution" in this case doesn't take into account the exponent, just the part in front.
Hopefully that helps clarify things somewhat. All of this is a bit confusing, and everyone gets bitten by it at some point. It's good to build up a bit of intuition about it, and to know which functions to call to find out exactly what's going on for your platform of choice!