What would be the most efficient way to compare two double or two float values?
Simply doing this is not correct:
bool CompareDoubles1 (double A, double B)
{
    return A == B;
}
But something like:
bool CompareDoubles2 (double A, double B)
{
    double diff = A - B;
    return (diff < EPSILON) && (-diff < EPSILON);
}
Seems to waste processing.
Does anyone know a smarter float comparer?
I found that the Google C++ Testing Framework contains a nice cross-platform template-based implementation of AlmostEqual2sComplement which works on both doubles and floats. Given that it is released under the BSD license, using it in your own code should be no problem, as long as you retain the license. I extracted the below code from
https://github.com/google/googletest/blob/master/googletest/include/gtest/internal/gtest-internal.h (previously hosted at http://code.google.com/p/googletest/source/browse/trunk/include/gtest/internal/gtest-internal.h) and added the license on top. Be sure to #define GTEST_OS_WINDOWS to some value (or to change the code where it's used to something that fits your codebase - it's BSD licensed, after all).
Usage example:
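A minimal usage sketch, assuming you saved the extracted code as gtest-internal.h (the file name and the wrapper function are my own; FloatingPoint and AlmostEquals are the names used in the Google Test header):

#include "gtest-internal.h"  // the extracted header; adjust the path to your project

bool AlmostEqual(double a, double b)
{
    // FloatingPoint<double> wraps the value and compares by ULP distance
    // (at most 4 ULPs apart counts as equal in Google Test).
    const testing::internal::FloatingPoint<double> lhs(a), rhs(b);
    return lhs.AlmostEquals(rhs);
}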
Here's the code (it is the FloatingPoint template from the header linked above):
EDIT: This post is 4 years old. It's probably still valid, and the code is nice, but some people found improvements. Best go get the latest version of AlmostEquals right from the Google Test source code, and not the one I pasted up here.
For a more in-depth approach, read Comparing floating point numbers. Here is the code snippet from that link:
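A minimal sketch of the idea behind that snippet, assuming 32-bit IEEE-754 floats; it reinterprets the bit patterns and compares their distance in ULPs, and is not the article's exact code:

#include <cmath>
#include <cstdint>
#include <cstring>

// Sketch of a ULP-based comparison in the spirit of AlmostEqual2sComplement.
bool AlmostEqual2sComplement(float A, float B, uint32_t maxUlps)
{
    // NaN must never compare equal to anything, including itself.
    if (std::isnan(A) || std::isnan(B))
        return false;

    uint32_t aBits, bBits;
    std::memcpy(&aBits, &A, sizeof aBits);  // reinterpret the float's bits
    std::memcpy(&bBits, &B, sizeof bBits);

    // Map the sign-magnitude bit patterns onto a monotonically increasing
    // unsigned range so that adjacent floats differ by exactly 1.
    const uint32_t signBit = 0x80000000u;
    auto biased = [signBit](uint32_t bits) {
        return (bits & signBit) ? signBit - (bits & ~signBit) : signBit + bits;
    };

    const uint32_t a = biased(aBits);
    const uint32_t b = biased(bBits);
    return (a > b ? a - b : b - a) <= maxUlps;
}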
Comparing floating point numbers depends on the context. Since even changing the order of operations can produce different results, it is important to know how "equal" you want the numbers to be.
Comparing floating point numbers by Bruce Dawson is a good place to start when looking at floating point comparison.
The following definitions are from The Art of Computer Programming by Knuth:
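A sketch of those definitions, written for floats with a caller-supplied epsilon: "approximately equal" scales epsilon by the larger magnitude, "essentially equal" by the smaller, so the second is the stricter test.

#include <cmath>

// Knuth-style relative comparisons: the tolerance scales with the
// magnitude of the operands instead of being an absolute bound.
bool approximatelyEqual(float a, float b, float epsilon)
{
    return std::fabs(a - b) <=
           (std::fabs(a) < std::fabs(b) ? std::fabs(b) : std::fabs(a)) * epsilon;
}

bool essentiallyEqual(float a, float b, float epsilon)
{
    return std::fabs(a - b) <=
           (std::fabs(a) > std::fabs(b) ? std::fabs(b) : std::fabs(a)) * epsilon;
}

bool definitelyGreaterThan(float a, float b, float epsilon)
{
    return (a - b) >
           (std::fabs(a) < std::fabs(b) ? std::fabs(b) : std::fabs(a)) * epsilon;
}

bool definitelyLessThan(float a, float b, float epsilon)
{
    return (b - a) >
           (std::fabs(a) < std::fabs(b) ? std::fabs(b) : std::fabs(a)) * epsilon;
}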
Of course, choosing epsilon depends on the context, and determines how equal you want the numbers to be.
Another method of comparing floating point numbers is to look at the ULP (units in last place) of the numbers. While not dealing specifically with comparisons, the paper What Every Computer Scientist Should Know About Floating-Point Arithmetic is a good resource for understanding how floating point works and what the pitfalls are, including what ULP is.
Be extremely careful using any of the other suggestions. It all depends on context.
I have spent a long time tracing bugs in a system that presumed a==b if |a-b| < epsilon. The underlying problems were:
- The implicit presumption in an algorithm that if a==b and b==c then a==c.
- Using the same epsilon for lines measured in inches and lines measured in mils (.001 inch). That is, a==b but 1000a != 1000b. (This is why AlmostEqual2sComplement asks for the epsilon or max ULPs; a small demonstration follows this list.)
- The use of the same epsilon for both the cosine of angles and the length of lines!
- Using such a compare function to sort items in a collection. (In this case using the built-in C++ operator == for doubles produced correct results.)
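A small sketch of the scaling pitfall from the second item, using a made-up absolute-epsilon compare and made-up values:

#include <cmath>
#include <cstdio>

// Hypothetical absolute-epsilon compare, like CompareDoubles2 in the question.
bool NearlyEqual(double a, double b, double eps = 1e-6)
{
    return std::fabs(a - b) < eps;
}

int main()
{
    double a = 1.0000004, b = 1.0000005;
    std::printf("%d\n", NearlyEqual(a, b));                // 1: a "equals" b
    std::printf("%d\n", NearlyEqual(1000 * a, 1000 * b));  // 0: but 1000a != 1000b
}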
Like I said: it all depends on context and the expected size of a and b.
BTW, std::numeric_limits<double>::epsilon() is the "machine epsilon". It is the difference between 1.0 and the next value representable by a double. I guess that it could be used in the compare function, but only if the expected values are less than 1. (This is in response to @cdv's answer...)
Also, if you basically have int arithmetic in doubles (here we use doubles to hold int values in certain cases), your arithmetic will be correct. For example, 4.0/2.0 will be the same as 1.0+1.0. This holds as long as you do not do things that result in fractions (4.0/3.0) and do not go outside the range of an int.
There are actually cases in numerical software where you want to check whether two floating point numbers are exactly equal. I posted this on a similar question:
https://stackoverflow.com/a/10973098/1447411
So you cannot say that "CompareDoubles1" is wrong in general.
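For illustration only (a sketch, not code from the linked answer), two situations where exact == on doubles is well defined:

#include <cassert>

int main()
{
    // Integer-valued arithmetic stays exact as long as no fractions appear
    // and the magnitudes stay well below 2^53.
    assert(4.0 / 2.0 == 1.0 + 1.0);  // both sides are exactly 2.0

    // A sentinel that is only ever stored verbatim, never computed,
    // can safely be tested with exact equality.
    const double kUnset = -1.0;
    double value = kUnset;
    assert(value == kUnset);
}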
I wrote this for Java, but maybe you will find it useful. It uses longs instead of doubles, but it takes care of NaNs, subnormals, etc.
Keep in mind that after a number of floating-point operations, the number can be very different from what we expect. There is no code to fix that.
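A quick sketch of that accumulated error, assuming IEEE-754 doubles:

#include <cstdio>

int main()
{
    double sum = 0.0;
    for (int i = 0; i < 10; ++i)
        sum += 0.1;                   // each 0.1 carries a tiny rounding error

    std::printf("%d\n", sum == 1.0);  // prints 0: the sum is not exactly 1.0
    std::printf("%.17g\n", sum);      // something like 0.99999999999999989
}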