How is the decimal type implemented?
Update:
- It's a 128-bit value type (16 bytes)
- 1 sign bit
- 96 bits (12 bytes) for the mantissa
- 8 bits for the exponent
- remaining bits (23 of them!) set to 0
Thanks! I'm gonna stick with using a 64-bit long with my own implied scale.
Decimal Floating Point article on Wikipedia, with a specific link to this article about System.Decimal.
A decimal is stored in 128 bits, even though only 102 are strictly necessary. It is convenient to consider the decimal as three 32-bit integers representing the mantissa, and then one integer representing the sign and exponent. The top bit of the last integer is the sign bit (in the normal way, with the bit being set (1) for negative numbers) and bits 16-23 (the low bits of the high 16-bit word) contain the exponent. The other bits must all be clear (0). This representation is the one given by decimal.GetBits(decimal) which returns an array of 4 ints.
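To see that layout directly, here is a minimal C# sketch decoding the sign and scale from the array that decimal.GetBits returns (the flags word puts the sign in bit 31 and the exponent in bits 16-23, as described above):

```csharp
using System;

class GetBitsDemo
{
    static void Main()
    {
        decimal d = -123.45m;
        int[] bits = decimal.GetBits(d);   // [lo, mid, hi, flags]

        bool isNegative = bits[3] < 0;       // sign bit is bit 31 of the flags word
        int scale = (bits[3] >> 16) & 0xFF;  // exponent lives in bits 16-23

        Console.WriteLine($"mantissa parts: lo={bits[0]}, mid={bits[1]}, hi={bits[2]}");
        Console.WriteLine($"negative={isNegative}, scale={scale}");
        // Prints: lo=12345, mid=0, hi=0, negative=True, scale=2
        // i.e. the stored value is -12345 / 10^2 = -123.45
    }
}
```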
As described on MSDN's Decimal Structure page at http://msdn.microsoft.com/en-us/library/system.decimal(VS.80).aspx:
The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28. Therefore, the binary representation of a Decimal value is of the form, ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96 - 1) is equal to MinValue, and 2^96 - 1 is equal to MaxValue.
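As a quick sanity check on that formula, the public decimal(lo, mid, hi, isNegative, scale) constructor can rebuild both extremes; setting all 96 mantissa bits with a scale of 0 gives exactly 2^96 - 1:

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        // -1 as an int is 0xFFFFFFFF, so three of them fill all 96 mantissa bits.
        decimal max = new decimal(-1, -1, -1, false, 0);  // 2^96 - 1
        decimal min = new decimal(-1, -1, -1, true, 0);   // -(2^96 - 1)

        Console.WriteLine(max == decimal.MaxValue);  // True
        Console.WriteLine(min == decimal.MinValue);  // True
        Console.WriteLine(max);  // 79228162514264337593543950335
    }
}
```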
The scaling factor also preserves any trailing zeroes in a Decimal number. Trailing zeroes do not affect the value of a Decimal number in arithmetic or comparison operations. However, trailing zeroes can be revealed by the ToString method if an appropriate format string is applied.
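The trailing-zero behaviour is easy to demonstrate: 1.0m and 1.00m are equal in value but carry different scales, which even the default ToString reveals:

```csharp
using System;

class TrailingZeroDemo
{
    static void Main()
    {
        decimal a = 1.0m;   // scale 1
        decimal b = 1.00m;  // scale 2

        Console.WriteLine(a == b);        // True: equal in value
        Console.WriteLine(a.ToString());  // "1.0"
        Console.WriteLine(b.ToString());  // "1.00"
        Console.WriteLine(a + b);         // "2.00": addition keeps the larger scale
    }
}
```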
From the C# Language Specification:
The decimal type is a 128-bit data type suitable for financial and monetary calculations. The decimal type can represent values ranging from 1.0 × 10^-28 to approximately 7.9 × 10^28 with 28-29 significant digits.
The finite set of values of type decimal are of the form (-1)^s × c × 10^-e, where the sign s is 0 or 1, the coefficient c is given by 0 ≤ c < 2^96, and the scale e is such that 0 ≤ e ≤ 28.
The decimal type does not support signed zeros, infinities, or NaNs. A decimal is represented as a 96-bit integer scaled by a power of ten. For decimals with an absolute value less than 1.0m, the value is exact to the 28th decimal place, but no further. For decimals with an absolute value greater than or equal to 1.0m, the value is exact to 28 or 29 digits. Contrary to the float and double data types, decimal fractional numbers such as 0.1 can be represented exactly in the decimal representation. In the float and double representations, such numbers are often infinite fractions, making those representations more prone to round-off errors.
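A small comparison of that round-off behaviour: summing 0.1 ten times is exact for decimal but drifts for double, because 0.1 has no finite binary representation:

```csharp
using System;

class RoundOffDemo
{
    static void Main()
    {
        double dsum = 0.0;
        decimal msum = 0.0m;
        for (int i = 0; i < 10; i++)
        {
            dsum += 0.1;
            msum += 0.1m;
        }

        Console.WriteLine(dsum == 1.0);   // False: binary round-off accumulated
        Console.WriteLine(msum == 1.0m);  // True: 0.1 is exact in base 10
        Console.WriteLine(dsum);          // 0.9999999999999999 (on .NET Core / .NET 5+)
    }
}
```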
If one of the operands of a binary operator is of type decimal, then the other operand must be of an integral type or of type decimal. If an integral type operand is present, it is converted to decimal before the operation is performed.
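In code, that rule looks like this (the commented-out line is the one the compiler rejects):

```csharp
using System;

class MixingDemo
{
    static void Main()
    {
        decimal price = 9.99m;
        int qty = 3;

        decimal total = price * qty;   // OK: the int operand converts to decimal
        // decimal bad = price * 1.5;  // compile error: no implicit double -> decimal conversion
        decimal ok = price * (decimal)1.5;  // an explicit cast is required

        Console.WriteLine(total);  // 29.97
        Console.WriteLine(ok);     // 14.985
    }
}
```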
The result of an operation on values of type decimal is that which would result from calculating an exact result (preserving scale, as defined for each operator) and then rounding to fit the representation. Results are rounded to the nearest representable value, and, when a result is equally close to two representable values, to the value that has an even number in the least significant digit position (this is known as “banker’s rounding”). A zero result always has a sign of 0 and a scale of 0.
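The same round-half-to-even rule is what Math.Round applies to decimal by default, which makes it a convenient way to see banker's rounding in action (this demonstrates the rounding mode itself, not the internal arithmetic path the spec describes):

```csharp
using System;

class BankersRoundingDemo
{
    static void Main()
    {
        Console.WriteLine(Math.Round(2.5m));  // 2: the midpoint rounds to the even neighbour
        Console.WriteLine(Math.Round(3.5m));  // 4: the midpoint rounds to the even neighbour
        Console.WriteLine(Math.Round(2.5m, 0, MidpointRounding.AwayFromZero));  // 3, for contrast
    }
}
```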
If a decimal arithmetic operation produces a value less than or equal to 5 × 10^-29 in absolute value, the result of the operation becomes zero. If a decimal arithmetic operation produces a result that is too large for the decimal format, a System.OverflowException is thrown.
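The overflow case is easy to observe: multiplying decimal.MaxValue throws, and for decimal this happens regardless of checked/unchecked context:

```csharp
using System;

class OverflowDemo
{
    static void Main()
    {
        try
        {
            decimal boom = decimal.MaxValue * 2m;  // exceeds the 96-bit range
            Console.WriteLine(boom);               // never reached
        }
        catch (OverflowException)
        {
            Console.WriteLine("decimal overflow");
        }
    }
}
```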
The decimal type has greater precision but smaller range than the floating-point types. Thus, conversions from the floating-point types to decimal might produce overflow exceptions, and conversions from decimal to the floating-point types might cause loss of precision. For these reasons, no implicit conversions exist between the floating-point types and decimal, and without explicit casts, it is not possible to mix floating-point and decimal operands in the same expression.
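A short sketch of those two failure modes: double-to-decimal can overflow at runtime, while decimal-to-double silently drops digits:

```csharp
using System;

class ConversionDemo
{
    static void Main()
    {
        double big = 1e30;  // beyond decimal's ~7.9e28 range
        try
        {
            decimal m = (decimal)big;  // the explicit cast compiles, but...
            Console.WriteLine(m);
        }
        catch (OverflowException)
        {
            Console.WriteLine("double -> decimal overflowed");
        }

        decimal precise = 1.0000000000000000000000000001m;  // 29 significant digits
        double lossy = (double)precise;  // explicit cast; precision is lost
        Console.WriteLine(lossy);        // 1
    }
}
```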
From "CLR via C#" 3rd Edition by J.Richter:
A 128-bit high-precision floating-point value commonly used for financial calculations in which rounding errors can’t be tolerated. Of the 128 bits, 1 bit represents the sign of the value, 96 bits represent the value itself, and 8 bits represent the power of 10 to divide the 96-bit value by (can be anywhere from 0 to 28). The remaining bits are unused.
The decimal keyword denotes a 128-bit data type.
Source
The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28. Therefore, the binary representation of a Decimal value is of the form, ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96 - 1) is equal to MinValue, and 2^96 - 1 is equal to MaxValue.
Source
The decimal type is just another form of floating point number - but unlike float and double, the base used is 10.
A simple explanation is here http://csharpindepth.com/Articles/General/Decimal.aspx