My team is working with financial software that exposes monetary values as C# floating-point doubles. Occasionally, we need to compare these values to see if they equal zero or fall under a particular limit. When I noticed unexpected behavior in this logic, I quickly learned about the rounding errors inherent in floating-point arithmetic (e.g. 1.1 + 2.2 = 3.3000000000000003). Up until this point, I had primarily used C# decimals to represent monetary values.
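To illustrate (a minimal repro; the exact string output depends on the runtime, since .NET Core 3.0+ prints the shortest round-trippable form):

```csharp
using System;

class Program
{
    static void Main()
    {
        double a = 1.1;
        double b = 2.2;
        Console.WriteLine(a + b);         // 3.3000000000000003 on .NET Core 3.0+
        Console.WriteLine(a + b == 3.3);  // False

        // decimal, which we used previously, does not exhibit this problem:
        decimal c = 1.1m;
        decimal d = 2.2m;
        Console.WriteLine(c + d);         // 3.3
        Console.WriteLine(c + d == 3.3m); // True
    }
}
```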
My team decided to resolve this issue by using the epsilon value approach. Essentially, when you compare two numbers, if the difference between them is less than epsilon, they are considered equal. We implemented this approach similarly to what is described in the article below: https://www.codeproject.com/Articles/383871/Demystify-Csharp-floating-point-equality-and-relat
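A minimal sketch of what our comparison helpers look like (the names here are my own placeholders, not taken from the article):

```csharp
using System;

static class MoneyMath
{
    private const double Epsilon = 0.00001;

    // Two amounts are considered equal if they differ by less than Epsilon.
    public static bool NearlyEqual(double a, double b) =>
        Math.Abs(a - b) < Epsilon;

    // An amount is considered zero if it is within Epsilon of zero.
    public static bool IsZero(double value) =>
        NearlyEqual(value, 0.0);

    // An amount falls under a limit only if it is short by more than Epsilon;
    // values within Epsilon of the limit are treated as equal to it.
    public static bool IsUnderLimit(double value, double limit) =>
        value < limit - Epsilon;
}
```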
Our challenge has been determining an appropriate value for epsilon. Our monetary values can have up to 3 digits to the right of the decimal point (scale = 3). This means the largest epsilon we could use is 0.0001 (anything larger and the third decimal digit gets ignored). Since epsilon values are supposed to be small, we decided to move it out one more decimal place to 0.00001 (just to be safe, you could say). C# doubles have a precision of at least 15 significant digits, so I believe this value of epsilon should work as long as the part to the left of the decimal point has at most 10 digits (15 - 5 = 10, where 5 is the number of digits epsilon extends to the right of the decimal point). With 10 digits, we can represent values into the billions, up to 9,999,999,999.999. It's possible that we may have numbers in the hundreds of millions, but we don't expect to go into the billions, so this limit should suffice.
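Here is a quick sanity check of that arithmetic near the top of the expected range, assuming the 0.00001 epsilon from above:

```csharp
using System;

class EpsilonCheck
{
    static void Main()
    {
        const double Epsilon = 0.00001;

        double limit = 9_999_999_999.999;              // 10 digits left of the decimal, scale 3
        double sameValue = 9_999_999_999.998 + 0.001;  // equals the limit, plus rounding error
        double smallerValue = 9_999_999_999.998;       // genuinely differs by 0.001

        // At this magnitude the accumulated rounding error is on the order of 1e-6,
        // comfortably below Epsilon, so the comparisons still resolve correctly.
        Console.WriteLine(Math.Abs(limit - sameValue) < Epsilon);    // True: treated as equal
        Console.WriteLine(Math.Abs(limit - smallerValue) < Epsilon); // False: treated as different
    }
}
```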
Is my rationale for choosing this value of epsilon correct? I found a lot of resources that discuss this approach, but few that provide guidance on how to actually choose epsilon.