How to deal with the fact that most decimal fractions cannot be accurately represented in binary

Posted 2019-04-29 16:03

Question:

So, we know that fractions such as 0.1 cannot be accurately represented in binary, which causes precision problems (such as those mentioned here: Formatting doubles for output in C#).

And we know we have the decimal type for a decimal representation of numbers... but the problem is that many Math methods do not support the decimal type, so we have to convert them to double, which ruins the number again.
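For illustration (this snippet is not part of the original question), both effects can be seen directly:

double d = 0.1;                                 // the stored double is only the nearest binary fraction to 0.1
Console.WriteLine(d.ToString("G17"));           // round-trip format shows 0.10000000000000001

decimal m = 0.1m;                               // decimal stores 0.1 exactly
Console.WriteLine(((double)m).ToString("G17")); // converting to double reintroduces the binary rounding error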

So what should we do?

Answer 1:

For a comprehensive examination of the challenges involved in performing floating-point calculations, see this article:

What Every Computer Scientist Should Know About Floating-Point Arithmetic http://docs.sun.com/source/806-3568/ncg_goldberg.html



Answer 2:

Oh, what should we do about the fact that most decimal fractions cannot be represented in binary? Or, for that matter, that binary fractions cannot be represented in decimal?

Or, even, that an infinity (in fact, an uncountable infinity) of real numbers, in any base, cannot be accurately represented in any computerized system?

Nothing! To recall an old cliché, you can get close enough for government work... In fact, you can get close enough for any work. There is no limit to the degree of accuracy a computer can generate; it just cannot be infinite, which is what would be required for a number representation scheme to be able to represent every possible real number.

You see, every number representation scheme you can design, on any computer, can only represent a finite number of distinct real numbers with 100% accuracy. And between each adjacent pair of those exactly representable numbers, there will always be an infinity of other numbers that it cannot represent with 100% accuracy.
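To make the "adjacent pair" idea concrete, here is a small sketch (assuming .NET's BitConverter; for a positive double, incrementing the raw bit pattern gives the next representable value):

double x = 0.1;
long bits = BitConverter.DoubleToInt64Bits(x);           // raw IEEE 754 bit pattern of x
double next = BitConverter.Int64BitsToDouble(bits + 1);  // the next representable double above x
Console.WriteLine(next - x);                             // a tiny but non-zero gap; every real number between x and next is unrepresentable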



Answer 3:

So what should we do?

We just keep on breathing. It really isn't a structural problem. We have limited precision, but usually more than enough. You just have to remember to format/round when presenting the numbers.

The problem in the following snippet is with the WriteLine(), not in the calculation(s):

double x = 6.9 - 10 * 0.69;       // mathematically zero, but both 6.9 and 0.69 are rounded to the nearest double
Console.WriteLine("x = {0}", x);  // prints a tiny non-zero value (around 8.9E-16) instead of x = 0
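As a small illustration of the "format/round when presenting" advice (not part of the original answer), either fix the display or round the value before showing it:

Console.WriteLine("x = {0:F5}", x);             // fixed-point format rounds for display: "x = 0.00000"
Console.WriteLine("x = {0}", Math.Round(x, 5)); // or round the value itself before presenting: "x = 0"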

If you have a specific problem, then post it. There usually are ways to prevent loss of precision. If you really need >= 30 decimal digits, you need a special library.



Answer 4:

Keep in mind that the precision you need, and the rounding rules required, will depend on your problem domain.

If you are writing software to control a nuclear reactor, or to model the first billionth of a second of the universe after the big bang (my friend actually did that), you will need much higher precision than if you are calculating sales tax (something I do for a living).

In the finance world, for example, there will be specific requirements on precision, either implicit or explicit. Some US taxing jurisdictions specify tax rates to 5 digits after the decimal point. Your rounding scheme needs to allow for that much precision. When much of Western Europe converted to the Euro, there was a very specific approach to rounding that was written into law. During that transition period, it was essential to round exactly as required.
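For example, a minimal sketch (the rate and amount are made up) of applying an explicitly chosen rounding rule to a tax calculation in decimal:

decimal rate = 0.06250m;                                              // hypothetical rate with 5 digits after the decimal point
decimal tax = 123.45m * rate;                                         // decimal multiplication is exact here: 7.7156250
decimal rounded = Math.Round(tax, 2, MidpointRounding.AwayFromZero);  // round to cents with an explicitly chosen rule
Console.WriteLine(rounded);                                           // 7.72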

Know the rules of your domain, and test that your rounding scheme satisfies those rules.



Answer 5:

I think what everyone is implying is: Inverting a sparse matrix? "There's an app for that", etc., etc.

Numerical computation is one well-flogged horse. If you have a problem, it was probably put to pasture before 1970 or even much earlier, carried forward library by library or snippet by snippet into the future.



Answer 6:

You could shift the decimal point so that the numbers are whole, then do 64-bit integer arithmetic, then shift it back. Then you would only have to worry about overflow problems.
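A rough sketch of that approach (the scale factor and values are only illustrative), assuming four decimal places of precision:

const long Scale = 10000;                 // shift by four decimal places
long a = (long)(0.1m * Scale);            // 0.1 -> 1000 (exact, so the cast is safe)
long b = (long)(0.2m * Scale);            // 0.2 -> 2000
long sum = checked(a + b);                // integer addition is exact; 'checked' surfaces overflow
Console.WriteLine((decimal)sum / Scale);  // shift back: prints 0.3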



Answer 7:

And we know we have the decimal type for a decimal representation of numbers... but the problem is that many Math methods do not support the decimal type, so we have to convert them to double, which ruins the number again.

Several of the Math methods do support decimal: Abs, Ceiling, Floor, Max, Min, Round, Sign, and Truncate. What these functions have in common is that they return exact results. This is consistent with the purpose of decimal: To do exact arithmetic with base-10 numbers.
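For instance, a small illustration (the value is chosen arbitrarily) of those decimal overloads staying exact:

decimal d = 1.005m;
Console.WriteLine(Math.Floor(d));     // 1
Console.WriteLine(Math.Ceiling(d));   // 2
Console.WriteLine(Math.Truncate(d));  // 1
Console.WriteLine(Math.Round(d, 2));  // 1.00 -- the default MidpointRounding.ToEven (banker's rounding)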

The trig and Exp/Log/Pow functions return approximate answers, so what would be the point of having overloads for an "exact" arithmetic type?