Question:
TL;DR:¹ What is an accurate and maintainable approach for representing currency or money in C?
Background to the question:
This has been answered for a number of other languages, but I could not find a solid answer for the C language.
C# What data type should I use to represent money in C#?
Java Why not use Double or Float to represent currency?
Objective-C How to represent money in Objective-C / iOS?
Note: There are plenty more similar questions for other languages; I just pulled a few for representative purposes.
All of those questions can be distilled down to "use a decimal data type", where the specific type may vary based upon the language.
There is a related question that ends up suggesting using a "fixed point" approach, but none of the answers address using a specific data type in C.
Likewise, I have looked at arbitrary precision libraries such as GMP, but it's not clear to me if this is the best approach to use or not.
Simplifying Assumptions:
Presume an x86 or x64 based architecture, but please call out any assumptions that would impact a RISC based architecture such as a Power chip or an Arm chip.
Accuracy in calculations is the primary requirement. Ease of maintenance would be the next requirement. Speed of calculations is important, but is tertiary to the other requirements.
Calculations need to be able to safely support operations accurate to the mill ($0.001) as well as supporting values ranging up to the trillions (10^12)
Differences from other questions:
As noted above, this type of question has been asked before for multiple other languages. This question is different from the other questions for a couple of reasons.
Using the accepted answer from: Why not use Double or Float to represent currency?, let's highlight the differences.
(Solution 1) A solution that works in just about any language is to use integers instead, and count cents. For instance, 1025 would be $10.25. Several languages also have built-in types to deal with money. (Solution 2) Among others, Java has the BigDecimal class, and C# has the decimal type.
Emphasis added to highlight the two suggested solutions
The first solution is essentially a variant of the "fixed point" approach. There is a problem with this solution in that the suggested range (tracking cents) is insufficient for mill based calculations and significant information will be lost on rounding.
The other solution is to use a native decimal class, which is not available within C.
Likewise, the answer doesn't consider other options such as creating a struct for handling these calculations or using an arbitrary precision library. Those are understandable omissions, as Java doesn't have structs, and why consider a third-party library when there is native support within the language?
This question is different from that question and other related questions because C doesn't have the same level of native type support and has language features that the other languages don't. And I haven't seen any of the other questions address the multiple ways that this could be approached in C.
The Question:
From my research, it seems that float is not an appropriate data type to use to represent currency within a C program due to floating point error.
What should I use to represent money in C, and why is that approach better than other approaches?
¹ This question started in a shorter form, but feedback received indicated the need to clarify the question.
Answer 1:
Either use an integer data type (long long, long, int) or a BCD (binary coded decimal) arithmetic library. You should store tenths or hundredths of the smallest amount you will display. That is, if you are using US dollars and presenting cents (hundredths of a dollar), your numeric values should be integers representing mills or millrays (tenths or hundredths of a cent). The extra significant figures will ensure your interest and similar calculations round consistently.
If you use an integer type, make sure that its range is great enough to handle the amounts of concern.
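For illustration, here is a minimal sketch of that suggestion, assuming US dollars held as a signed 64-bit count of mills; the type name, macro, prices, and 7% rate are all illustrative, not from the answer:

#include <stdio.h>
#include <inttypes.h>

typedef int64_t money_mills;              /* $1.00 == 1000 mills */
#define MILLS_PER_DOLLAR 1000

int main(void) {
    money_mills price = 10250;                   /* $10.250                 */
    money_mills tax   = (price * 7 + 50) / 100;  /* 7%, rounded to the mill */
    money_mills total = price + tax;             /* 10968 mills             */
    printf("total: $%" PRId64 ".%03" PRId64 "\n",
           total / MILLS_PER_DOLLAR, total % MILLS_PER_DOLLAR);
    return 0;
}

Keeping the extra mill digit means the 7% tax rounds to 718 mills instead of truncating to 717, which is the consistency this answer is after.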
Answer 2:
The best money/currency representation would be a floating point type of high enough precision, like double, that has FLT_RADIX == 10. Such platforms/compilers are rare, as the vast majority of systems have FLT_RADIX == 2.
Four alternatives: integers, non-decimal floating point, special decimal floating point, and a user-defined structure.
Integers: A common solution uses an integer count of the smallest denomination in the currency of choice, for example counting US cents instead of dollars. The range of the integer type needs to be reasonably wide: something like long long instead of int, as a minimum-width (16-bit) int counting cents may only handle about +/- $327.00. This works fine for simple accounting tasks involving add/subtract/multiply, but begins to crack with division and the complex functions used in interest calculations, such as the monthly payment formula. Signed integer math has no overflow protection. Care needs to be applied when rounding division results: q = (a + b/2)/b is not good enough, as the sketch below shows.
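The weakness is that adding b/2 rounds the wrong way for negative numerators. A minimal sketch of a sign-aware version (the helper name is illustrative; it assumes b > 0 and no intermediate overflow):

long long div_round_nearest(long long a, long long b) {
    /* Round a/b to nearest, ties away from zero; assumes b > 0. */
    long long half = b / 2;
    return (a >= 0) ? (a + half) / b : (a - half) / b;
}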
Binary floating point: two common pitfalls: 1) using float, which is so often of insufficient precision, and 2) incorrect rounding. Using double well addresses problem #1 for many accounting limits. Yet code still often needs to round to the desired minimum currency unit for satisfactory results.
// Sample - does not properly meet nuanced corner cases.
#include <math.h>   /* round() lives here */

double RoundToNearestCents(double dollar) {
    return round(dollar * 100.0) / 100.0;
}
A variation on double is to store a double count of the smallest unit (0.01 or 0.001). An important advantage is the ability to round simply by using the round() function, which by itself meets corner cases.
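A minimal sketch of that variation, with illustrative values not taken from the answer: the amount is held as whole cents, and round() is applied after each inexact step.

#include <math.h>
#include <stdio.h>

int main(void) {
    double cents = 1025.0;                      /* $10.25 held as 1025 cents */
    double with_interest = round(cents * 1.05); /* 5%: 1076.25 -> 1076       */
    printf("$%.2f\n", with_interest / 100.0);   /* prints $10.76             */
    return 0;
}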
Special decimal floating point: some systems provide a "decimal" type other than double that conforms to decimal64 or something similar. Although this handles most of the above issues, portability is sacrificed.
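For illustration, a sketch assuming a compiler with decimal floating point support per ISO/IEC TS 18661-2 (e.g., GCC on x86); it will not build everywhere, which is exactly the portability trade-off:

#include <stdio.h>

int main(void) {
    _Decimal64 dime = 0.10DD;          /* DD suffix: _Decimal64 literal;
                                          0.10 is exact in radix 10     */
    _Decimal64 dollar = dime * 10.0DD; /* exactly 1.00                  */
    /* Standard printf has no conversion for decimal types; cast for display. */
    printf("%.2f\n", (double)dollar);
    return 0;
}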
User-defined structure (like fixed-point): of course this can solve everything, except that it is error-prone to code so much from scratch, and it is real work. The result may function perfectly yet lack performance.
Conclusion: this is a deep subject and each approach deserves a more expansive discussion. The general answer is: there is no general solution, as all approaches have significant weaknesses. So it depends on the specifics of the application.
[Edit]
Given OP's additional edits, I recommend using a double count of the smallest unit of currency (example: $0.01 --> double money = 1.0;). At various points in the code, whenever an exact value is required, use round().
double interest_in_cents = round(
    Monthly_payment(0.07/12 /* 7%/year as a monthly rate */, N_payments, principal_in_cents));
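Monthly_payment is not defined in the answer; a minimal sketch of such a helper, assuming the standard amortization formula M = P*r / (1 - (1+r)^-n) with all amounts in cents:

#include <math.h>

/* Hypothetical helper: standard amortized monthly payment, in cents. */
double Monthly_payment(double monthly_rate, int n_payments, double principal_in_cents) {
    if (monthly_rate == 0.0)
        return principal_in_cents / n_payments;
    return principal_in_cents * monthly_rate
           / (1.0 - pow(1.0 + monthly_rate, -n_payments));
}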
My crystal ball says by 2022 the U.S. will drop the $0.01 and the smallest unit will be $0.05. I would use the approach that can best handle that shift.
Answer 3:
If speed is your primary concern, then use an integral type scaled to the smallest unit you need to represent (such as a mill, which is 0.001 dollars or 0.1 cents). Thus, 123456 represents $123.456.
The problem with this approach is that you may run out of digits; a 32-bit unsigned int tops out at 4,294,967,295, so the largest value you could represent under this scheme would be $4,294,967.295. Not good if you need to deal with values in the billions.
Another approach is to use a struct type with one integral member to represent the whole-dollar amount, and another integral member to represent the fractional-dollar amount (again, scaled to the smallest unit you need to represent, whether it's cents, mills, or something smaller), similar to the timespec struct that stores whole seconds in one field and nanoseconds in the other:
struct money {
    long whole_dollars; // long long if you have it and you need it
    int  frac_dollar;   // count of the chosen smallest unit (cents, mills, ...)
};
An int is more than wide enough to handle the scaling any sane person would use. frac_dollar is left signed so it can carry the sign when the whole_dollars portion is 0 (e.g., -$0.25).
If you're more worried about storing arbitrarily large values, there's always BCD, which can represent way more digits than any native integral or floating-point type.
Representation is only half the battle, though; you also have to be able to perform arithmetic on these types, and operations on currency may have very specific rounding rules. So you'll want to take that into consideration when deciding on your representation.
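For illustration, a minimal sketch of addition over the struct above, assuming frac_dollar counts mills and both fields carry the value's sign; the names and scale are assumptions, not from the answer:

#define MONEY_SCALE 1000  /* mills per dollar (assumed) */

struct money money_add(struct money a, struct money b) {
    /* Widen to a single mill count so the carry works out naturally. */
    long long total = ((long long)a.whole_dollars * MONEY_SCALE + a.frac_dollar)
                    + ((long long)b.whole_dollars * MONEY_SCALE + b.frac_dollar);
    struct money r;
    r.whole_dollars = (long)(total / MONEY_SCALE);  /* truncates toward zero */
    r.frac_dollar   = (int)(total % MONEY_SCALE);   /* same sign as total    */
    return r;
}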
Answer 4:
Use an int (32- or 64-bit as you need) and think in cents or partial cents as needed. With 32 bits and thinking in cents, you can represent up to about 21 million dollars signed (roughly 42 million unsigned) in a single value. With 64 bits it's well beyond all the US debt ever combined.
There are some gotchas when doing the calculations that you have to be aware of, so you don't divide away half of your significant digits.
It's a game of knowing the ranges and when the rounding after division is fine.
For example, a proper round (of the .5-up variety) after division can be done by first adding half the denominator to the value and then doing the division. If you are doing finance, though, you will need a somewhat more advanced rounding scheme, one approved by your accountants.
long long res = (amount * interest + 500) / 1000;  /* interest scaled by 1000; +500 rounds .5 up */
Only convert to dollars (or whatever the display unit is) when communicating with the user.
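A minimal sketch of that boundary conversion, with an illustrative helper (not from the answer) that formats a signed count of cents only at output time:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative: format a signed cents count for display only. */
void print_cents(long long cents) {
    long long d = llabs(cents);
    printf("%s$%lld.%02lld\n", cents < 0 ? "-" : "", d / 100, d % 100);
}

Keeping formatting out of the arithmetic path means the internal representation can later change (say, from cents to mills) without touching the math.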