First of all, this is not a floating point newbie question. I know results of floating point arithmetic (not to mention transcendental functions) usually cannot be represented exactly, and that most terminating decimals cannot be represented exactly as binary floating point numbers.
That said, each possible floating point value corresponds exactly to a dyadic rational (a rational number p/q where q is a power of 2), which in turn has an exact decimal representation.
My question is: How do you find this exact decimal representation efficiently? sprintf and similar functions are usually only specified up to a number of significant digits to uniquely determine the original floating point value; they don't necessarily print the exact decimal representation. I know one algorithm I've used, but it's very slow, O(e^2) where e is the exponent. Here's an outline:
- Convert the mantissa to a decimal integer. You can either do this by pulling apart the bits to read the mantissa directly, or you can write a messy floating point loop that first multiplies the value by a power of two to put it in the range 1<=x<10, then pulls off a digit at a time by casting to int, subtracting, and multiplying by 10.
- Apply the exponent by repeatedly multiplying or dividing by 2. This is an operation on the string of decimal digits you generated. Every ~3 multiplications will add an extra digit to the left. Every single division will add an extra digit to the right.
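To make the outline concrete, here is a minimal sketch in C, assuming an IEEE 754 double; it uses frexp to pull the number apart rather than reading the bits directly, and it omits sign, infinity, and NaN handling:

    /* Minimal sketch of the outline above, assuming an IEEE 754 double.
     * Digits are kept one per byte in a decimal string. */
    #include <stdio.h>
    #include <string.h>
    #include <math.h>
    #include <stdint.h>

    #define MAXDIG 1200  /* enough for any finite double (|exponent| <= 1074) */

    static void mul2(char *d, int *len) {
        int carry = 0;
        for (int i = *len - 1; i >= 0; i--) {
            int v = d[i] * 2 + carry;
            d[i] = v % 10;
            carry = v / 10;
        }
        if (carry) {                       /* new digit on the left */
            memmove(d + 1, d, *len);
            d[0] = carry;
            (*len)++;
        }
    }

    static void div2(char *d, int *len, int *point) {
        int rem = 0;
        for (int i = 0; i < *len; i++) {
            int v = rem * 10 + d[i];
            d[i] = v / 2;
            rem = v % 2;
        }
        if (rem) {                         /* new digit on the right */
            d[(*len)++] = 5;
            (*point)++;                    /* one more digit after the point */
        }
    }

    int main(void) {
        double x = 0.1;
        int e;
        double f = frexp(x, &e);              /* x = f * 2^e, 0.5 <= f < 1 */
        uint64_t m = (uint64_t)ldexp(f, 53);  /* 53-bit integer mantissa */
        e -= 53;                              /* now x = m * 2^e exactly */

        char d[MAXDIG], tmp[32];
        int len = 0, point = 0;
        int n = snprintf(tmp, sizeof tmp, "%llu", (unsigned long long)m);
        for (int i = 0; i < n; i++) d[len++] = tmp[i] - '0';

        while (e > 0) { mul2(d, &len); e--; }          /* apply the exponent */
        while (e < 0) { div2(d, &len, &point); e++; }

        int ip = len - point, i = 0;       /* digits before the decimal point */
        while (i < ip - 1 && d[i] == 0) i++;           /* skip leading zeros */
        for (; i < ip; i++) putchar(d[i] + '0');
        if (point > 0) {
            putchar('.');
            for (i = ip; i < len; i++) putchar(d[i] + '0');
        }
        /* prints 0.1000000000000000055511151231257827021181583404541015625 */
        putchar('\n');
        return 0;
    }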
Is this really the best possible? I doubt it, but I'm not a floating-point expert and I can't find a way to do the base-10 computations on the floating point representation of the number without running into a possibility of inexact results (multiplying or dividing by anything but a power of 2 is a lossy operation on floating point numbers unless you know you have free bits to work with).
This question has a bureaucratic part and an algorithmic part. A floating point number is stored internally as (2^e × m), where e is an exponent (itself in binary) and m is a mantissa. The bureaucratic part of the question is how to access this data, but R. seems more interested in the algorithmic part of the question, namely converting (2^e × m) to a fraction (a/b) in decimal form. The answer to the bureaucratic question in several languages is "frexp" (which is an interesting detail that I didn't know before today).
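For example, a minimal demonstration of the bureaucratic part:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        int e;
        double m = frexp(0.1, &e);  /* 0.1 == m * 2^e with 0.5 <= m < 1 */
        printf("0.1 = %.17g * 2^%d\n", m, e);  /* m = 0.8..., e = -3 */
        return 0;
    }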
It is true that at first glance, it takes O(e^2) work just to write 2^e in decimal, and more time still for the mantissa. But, thanks to the magic of the Schönhage-Strassen fast multiplication algorithm, you can do it in Õ(e) time, where the tilde means "up to log factors". If you view Schönhage-Strassen as magic, then it's not that hard to think of what to do. If e is even, you can recursively compute 2^(e/2), and then square it using fast multiplication. On the other hand, if e is odd, you can recursively compute 2^(e-1) and then double it. You have to be careful to check that there is a version of Schönhage-Strassen in base 10. Although it is not widely documented, it can be done in any base.
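The shape of the recursion is easy to see with machine integers; this sketch just shows the structure, where in a real implementation h * h would be a fast squaring of a base-10 bignum:

    #include <stdint.h>
    #include <stdio.h>

    /* The recursion described above, demonstrated with machine integers;
     * the fast multiplication itself is not shown. */
    static uint64_t pow2(unsigned e) {
        if (e == 0)
            return 1;
        if (e % 2 == 0) {
            uint64_t h = pow2(e / 2);   /* recursively compute 2^(e/2) */
            return h * h;               /* ...then square it */
        }
        return 2 * pow2(e - 1);         /* odd: double 2^(e-1) */
    }

    int main(void) {
        printf("%llu\n", (unsigned long long)pow2(40));  /* 1099511627776 */
        return 0;
    }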
Converting a very long mantissa from binary to base 10 is not exactly the same question, but it has a similar answer. You can divide the mantissa into two halves, m = a·2^k + b. Then recursively convert a and b to base 10, convert 2^k to base 10, and do another fast multiplication to compute m in base 10.
The abstract result behind all of this is that you can convert integers from one base to another in Õ(N) time.
If the question is about standard 64-bit floating point numbers, then it's too small for the fancy Schönhage-Strassen algorithm. In this range you can instead save work with various tricks. One approach is to store all 2048 values of 2^e in a lookup table, and then work in the mantissa with asymmetric multiplication (in between long multiplication and short multiplication). Another trick is to work in base 10000 (or a higher power of 10, depending on architecture) instead of base 10. But, as R. points out in the comments, 128-bit floating point numbers already allow large enough exponents to call into question both lookup tables and standard long multiplication. As a practical matter, long multiplication is the fastest up to a handful of digits, then in a significant medium range one can use Karatsuba multiplication or Toom-Cook multiplication, and then after that a variation of Schönhage-Strassen is best not just in theory but also in practice.
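As a sketch of the power-of-10-base trick (using base 10^9 here, since that's what fits in a 32-bit word), the doubling pass from the slow algorithm handles nine decimal digits per array element instead of one:

    #include <stdio.h>
    #include <stdint.h>

    /* Compute 2^100 by repeated doubling of a decimal accumulator kept
     * in base 10^9 limbs, little-endian. */
    #define BASE 1000000000u
    #define NLIMBS 8

    int main(void) {
        uint32_t limb[NLIMBS] = { 1 };            /* value = 1 */
        for (int step = 0; step < 100; step++) {
            uint32_t carry = 0;
            for (int i = 0; i < NLIMBS; i++) {
                uint64_t v = (uint64_t)limb[i] * 2 + carry;
                limb[i] = (uint32_t)(v % BASE);
                carry   = (uint32_t)(v / BASE);
            }
        }
        int top = NLIMBS - 1;
        while (top > 0 && limb[top] == 0) top--;
        printf("%u", limb[top]);                  /* most significant limb */
        for (int i = top - 1; i >= 0; i--)
            printf("%09u", limb[i]);              /* zero-padded lower limbs */
        printf("\n");  /* 1267650600228229401496703205376 */
        return 0;
    }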
Actually, the big integer package GMP already has Õ(N) time radix conversion, as well as good heuristics for which choice of multiplication algorithm. The only difference between their solution and mine is that instead of doing any big arithmetic in base 10, they compute large powers of 10 in base 2. In this solution, they also need fast division, but that can be obtained from fast multiplication in any of several ways.
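Here is a sketch of what leaning on GMP for the whole job might look like, using documented GMP calls: mpq_set_d() captures a double exactly (doubles are dyadic rationals), and mpz_get_str() provides the fast radix conversion. Sign and special-value handling are omitted; compile with -lgmp:

    #include <stdio.h>
    #include <string.h>
    #include <gmp.h>

    int main(void) {
        mpq_t q;
        mpq_init(q);
        mpq_set_d(q, 0.1);            /* exact: every double is p / 2^k */

        /* the canonical denominator is 2^k; find k */
        mp_bitcnt_t k = mpz_scan1(mpq_denref(q), 0);

        /* scale the numerator by 5^k so the denominator becomes 10^k */
        mpz_t n, p5;
        mpz_init_set(n, mpq_numref(q));
        mpz_init_set_ui(p5, 5);
        mpz_pow_ui(p5, p5, k);
        mpz_mul(n, n, p5);

        /* n / 10^k: print with a decimal point k digits from the right */
        char *s = mpz_get_str(NULL, 10, n);
        size_t len = strlen(s);
        if (k == 0) {
            printf("%s\n", s);
        } else if (len > k) {
            printf("%.*s.%s\n", (int)(len - k), s, s + (len - k));
        } else {
            printf("0.");
            for (size_t i = len; i < k; i++) putchar('0');
            printf("%s\n", s);
        }

        mpq_clear(q); mpz_clear(n); mpz_clear(p5);
        return 0;
    }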
I see you've accepted an answer already but here are a couple of open source implementations of this conversion you might want to look at:
- David Gay's dtoa() function in dtoa.c (http://www.netlib.org/fp/dtoa.c).
- The function ___printf_fp() in file /stdio-common/printf_fp.c in glibc (http://ftp.gnu.org/gnu/glibc/glibc-2.11.2.tar.gz, for example).

Both will print as many digits as you ask for in a %f type printf (as I've written about here: http://www.exploringbinary.com/print-precision-of-dyadic-fractions-varies-by-language/ and http://www.exploringbinary.com/print-precision-of-floating-point-integers-varies-too/).
There's been a lot of work on printing floating-point numbers. The gold standard is to print out a decimal equivalent of minimal length such that when the decimal equivalent is read back in, you get the same floating-point number that you started with, no matter what the rounding mode is during readback. You can read about the algorithm in the excellent paper by Burger and Dybvig.
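Not the Burger and Dybvig algorithm itself, but the round-trip property it targets is easy to check: for IEEE 754 doubles, 17 significant digits always read back exactly (their algorithm finds the shortest such string rather than a fixed-width one):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        double x = 0.1;
        char buf[32];
        snprintf(buf, sizeof buf, "%.17g", x);  /* enough digits to round-trip */
        double y = strtod(buf, NULL);
        printf("%s reads back %s\n", buf, x == y ? "identically" : "differently");
        return 0;
    }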
Although it's C# and your question is tagged with C, Jon Skeet has code to convert a double to its exact representation as a string: http://www.yoda.arachsys.com/csharp/DoubleConverter.cs. From a quick glance, it does not appear to be too hard to port to C, and even easier to write in C++.
Upon further reflection, it appears that Jon's algorithm is also O(e^2), as it also loops over the exponent. However, that means the algorithm is O(log(n)^2) (where n is the magnitude of the floating-point number), and I'm not sure you can convert from base 2 to base 10 in better than log-squared time.
Well, being no floating-point expert myself, I'd defer to using a well-tested open source library.
The GNU MPFR is a good one.
The MPFR library is a C library for multiple-precision floating-point computations with correct rounding. The main goal of MPFR is to provide a library for multiple-precision floating-point computation which is both efficient and has a well-defined semantics.
sprintf and similar functions are usually only specified up to a number of significant digits to uniquely determine the original floating point value; they don't necessarily print the exact decimal representation.
You can ask for more significant digits than the default:
printf("%.100g\n", 0.1);
prints 0.1000000000000000055511151231257827021181583404541015625.
If you want more exact results, why not use fixed point math instead? Conversions are quick. Error is known and can be worked around. Not an exact answer to your question, but a different idea for you.
Off the top of my head, why not break the exponent down into a sum of binary exponents first? Then all your operations are lossless.
I.e.
10^2 = 2^6 + 2^5 + 2^2
Then sum:
(mantissa<<6) + (mantissa<<5) + (mantissa<<2)
I'm thinking that breaking it down would be O(n) in the number of digits, each shift is O(1), and the summing is O(n) digits...
You would have to have an integer class big enough to store the results, of course...
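A minimal sketch of the idea, with the mantissa of 0.1 and a 64-bit integer standing in for the big-integer class (so it only works while the result fits in 64 bits):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 53-bit mantissa of the double 0.1 (0x1.999999999999Ap-4) */
        uint64_t mantissa = 0x1999999999999AULL;

        /* multiply by 100 = 2^6 + 2^5 + 2^2 using only lossless shifts and adds */
        uint64_t result = (mantissa << 6) + (mantissa << 5) + (mantissa << 2);

        printf("%llu * 100 = %llu\n",
               (unsigned long long)mantissa, (unsigned long long)result);
        return 0;
    }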
Let me know - I'm curious about this, it really made me think. :-)
You don't. The closest you can come to that is dumping the bytes.