Currently learning C++ and this has just occurred to me. I'm just curious about this as I'm about to develop a simple bank program. I'll be using double for calculating dollars/interest rates etc., but there are some tiny differences between computer calculations and human calculations. I imagine that those extra fractions of a penny in the real world can make all the difference!
In many cases, financial calculations are done using fixed-point arithmetic instead of floating point.
For example, the .NET Decimal type, or the VB6 Currency type. These are basically just integer types, where everyone has agreed that the units are some fraction of a cent, like $0.0001.
And yes, some rounding has to occur, but it is done very systematically. Usually the rounding rules are somewhere deep in the fine print of your contract (the interest rate is x%, compounded every T, rounded up to the nearest penny, but not less than $y every statement period).
An 8-byte long long ranges from -9223372036854775808 to 9223372036854775807. Do everything in thousandths of a cent/penny and you can still handle amounts up into the trillions of dollars/pounds/whatever.
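A minimal sketch of that idea, assuming one unit is a thousandth of a cent and a round-half-away-from-zero rule (both are illustrative choices here, not a standard):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical fixed-point money scale: one unit = 1/1000 of a cent.
constexpr std::int64_t UNITS_PER_CENT   = 1000;
constexpr std::int64_t UNITS_PER_DOLLAR = 100 * UNITS_PER_CENT;

// Round to the nearest whole cent, halves away from zero.
std::int64_t round_to_cent(std::int64_t units) {
    std::int64_t half = UNITS_PER_CENT / 2;
    return (units >= 0 ? units + half : units - half)
           / UNITS_PER_CENT * UNITS_PER_CENT;
}

int main() {
    std::int64_t balance = 1234 * UNITS_PER_DOLLAR + 56 * UNITS_PER_CENT; // $1234.56
    // 5% simple interest: multiply before dividing so nothing is lost mid-calculation.
    std::int64_t interest = round_to_cent(balance * 5 / 100);
    std::printf("interest = $%lld.%02lld\n",                 // prints $61.73
                (long long)(interest / UNITS_PER_DOLLAR),
                (long long)(interest % UNITS_PER_DOLLAR / UNITS_PER_CENT));
}
```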
It depends on the application. All calculations with decimals will require rounding when you output them as dollars and cents (or whatever the local currency is): the base price of an article may have only two digits after the decimal, but when you add on sales tax or VAT there will be more, and if you need to calculate interest on an investment, there will be more still.
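Here is that effect in miniature, assuming a hypothetical 8.25% sales-tax rate on a $19.99 item: the exact tax, 19.99 × 0.0825 = 1.649175 dollars, has six digits after the decimal point and must be rounded before it can appear on a receipt:

```cpp
#include <cstdio>

int main() {
    long long price_cents = 1999;                 // $19.99, held as whole cents
    // price (10^-2 dollars) * rate (825 = 0.0825 * 10^4) = tax in 10^-6 dollars
    long long tax_millionths = price_cents * 825; // 1649175, i.e. $1.649175 exactly
    // Round half up to whole cents before printing.
    long long tax_cents = (tax_millionths + 5000) / 10000;
    std::printf("tax = $%lld.%02lld\n", tax_cents / 100, tax_cents % 100); // $1.65
}
```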
Generally, using double gives the most accurate results. However, if your software is being used for some sort of bookkeeping required by law (e.g. for tax purposes), you may be required to follow standard accepted rounding practices, and these are based on decimal arithmetic, not binary, hexadecimal or octal (which are the usual bases for floating point; binary is universal on everything but mainframes). In such cases, you'll need to use some sort of Decimal class, which ensures the correct rounding. For other uses (e.g. risk analysis), double is fine.
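To see why binary floating point can violate decimal rounding rules, consider a value like 2.675: the nearest double is slightly below 2.675, so rounding it to two places gives 2.67, while a decimal round-half-up rule demands 2.68. A small demonstration:

```cpp
#include <cstdio>

int main() {
    double price = 2.675;
    // The nearest double to 2.675 is actually slightly smaller:
    std::printf("%.20f\n", price); // 2.67499999999999982236...
    // So rounding to two places yields 2.67, not the 2.68 that
    // decimal round-half-up bookkeeping rules would require.
    std::printf("%.2f\n", price);  // 2.67
}
```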
Just because a number is not an integer does not mean that it cannot be calculated exactly. Consider that a dollars-and-cents value is an integer if one counts the number of pennies (cents), so it is a simple matter for a fixed-point library using two decimal places of precision to multiply each number by 100, perform the calculation in integers, and then divide by 100 again.
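A quick illustration of that scheme for plain addition (where no rescaling is needed): summing $0.10 and $0.20 as whole cents is exact, while the same sum in double is not:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // $0.10 and $0.20 held as whole cents: the sum is exactly 30 cents.
    std::int64_t a = 10, b = 20;
    std::int64_t sum = a + b;
    std::printf("exact:  %lld.%02lld\n",
                (long long)(sum / 100), (long long)(sum % 100));

    // The same sum in binary floating point picks up an error:
    double d = 0.10 + 0.20;
    std::printf("double: %.17f\n", d); // 0.30000000000000004
}
```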