When dealing with floating point values in Java, calling the toString() method gives a printed value with the correct number of significant figures. In C++, however, printing a double via a stringstream rounds the value to at most six significant digits (the default stream precision). Is there a way to "pretty print" a floating point value in C++ to the (assumed) correct number of significant figures?
EDIT: I think I am being misunderstood. I want the output to be of dynamic length, not a fixed precision. I am familiar with setprecision. If you look at the Java source for Double, it calculates the number of significant digits somehow, and I would really like to understand how it works and/or how feasible it is to replicate this easily in C++.
/*
 * FIRST IMPORTANT CONSTRUCTOR: DOUBLE
 */
public FloatingDecimal( double d )
{
    long dBits = Double.doubleToLongBits( d );
    long fractBits;
    int  binExp;
    int  nSignificantBits;
    // discover and delete sign
    if ( (dBits&signMask) != 0 ){
        isNegative = true;
        dBits ^= signMask;
    } else {
        isNegative = false;
    }
    // Begin to unpack
    // Discover obvious special cases of NaN and Infinity.
    binExp = (int)( (dBits&expMask) >> expShift );
    fractBits = dBits&fractMask;
    if ( binExp == (int)(expMask>>expShift) ) {
        isExceptional = true;
        if ( fractBits == 0L ){
            digits = infinity;
        } else {
            digits = notANumber;
            isNegative = false; // NaN has no sign!
        }
        nDigits = digits.length;
        return;
    }
    isExceptional = false;
    // Finish unpacking
    // Normalize denormalized numbers.
    // Insert assumed high-order bit for normalized numbers.
    // Subtract exponent bias.
    if ( binExp == 0 ){
        if ( fractBits == 0L ){
            // not a denorm, just a 0!
            decExponent = 0;
            digits = zero;
            nDigits = 1;
            return;
        }
        while ( (fractBits&fractHOB) == 0L ){
            fractBits <<= 1;
            binExp -= 1;
        }
        nSignificantBits = expShift + binExp +1; // recall binExp is - shift count.
        binExp += 1;
    } else {
        fractBits |= fractHOB;
        nSignificantBits = expShift+1;
    }
    binExp -= expBias;
    // call the routine that actually does all the hard work.
    dtoa( binExp, fractBits, nSignificantBits );
}
At the end, this constructor calls dtoa( binExp, fractBits, nSignificantBits ), which handles a number of cases - this is from OpenJDK 6.
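For what it's worth, the unpacking the constructor does before the dtoa call translates directly to C++. A minimal sketch, where the mask and shift constants are mine and mirror the (unshown) signMask/expMask/fractMask/expShift/expBias fields of FloatingDecimal:

#include <cstdint>
#include <cstdio>
#include <cstring>

// Sketch: extract the same sign / biased-exponent / fraction fields that the
// FloatingDecimal constructor pulls apart before handing them to dtoa.
void unpack(double d)
{
    std::uint64_t dBits;
    std::memcpy(&dBits, &d, sizeof dBits);            // C++ analogue of Double.doubleToLongBits

    const std::uint64_t signMask  = 0x8000000000000000ULL; // bit 63
    const std::uint64_t expMask   = 0x7FF0000000000000ULL; // bits 62..52
    const std::uint64_t fractMask = 0x000FFFFFFFFFFFFFULL; // bits 51..0
    const int expShift = 52;
    const int expBias  = 1023;

    const bool isNegative     = (dBits & signMask) != 0;
    const int  binExp         = static_cast<int>((dBits & expMask) >> expShift);
    const std::uint64_t fract = dBits & fractMask;

    // The bias subtraction is only the true exponent for normal numbers;
    // zeros, denormals, NaN and Infinity need the special cases shown above.
    std::printf("negative=%d  biasedExp=%d  unbiasedExp=%d  fraction=0x%013llx\n",
                isNegative ? 1 : 0, binExp, binExp - expBias,
                static_cast<unsigned long long>(fract));
}

The hard part is not this unpacking, though, but what dtoa then does with the fraction and exponent.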
For more clarity, an example. Java:
double test1 = 1.2593;
double test2 = 0.004963;
double test3 = 1.55558742563;
System.out.println(test1);
System.out.println(test2);
System.out.println(test3);
Output:
1.2593
0.004963
1.55558742563
C++ (with the same values):
std::cout << test1 << "\n";
std::cout << test2 << "\n";
std::cout << test3 << "\n";
Output:
1.2593
0.004963
1.55559
There is a utility called std::numeric_limits: std::numeric_limits<double>::digits10 tells you how many decimal digits a double is guaranteed to carry. Note that IEEE numbers are not represented exactly by decimal digits; these are binary quantities, and the more accurate figure is the number of binary bits, std::numeric_limits<double>::digits. To pretty print all the significant digits, use setprecision with the decimal count.
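A minimal sketch of how those pieces fit together (the variable and headers are my own choices):

#include <iostream>
#include <iomanip>
#include <limits>

int main()
{
    double test3 = 1.55558742563;

    // Decimal digits a double is guaranteed to carry...
    std::cout << std::numeric_limits<double>::digits10 << "\n";   // typically 15
    // ...and the more accurate figure: the number of binary mantissa bits.
    std::cout << std::numeric_limits<double>::digits << "\n";     // typically 53

    // Print with all guaranteed significant digits.
    // (std::numeric_limits<double>::max_digits10, C++11, gives the count
    //  needed for an exact round trip instead.)
    std::cout << std::setprecision(std::numeric_limits<double>::digits10)
              << test3 << "\n";                                   // 1.55558742563
    return 0;
}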
You can use the ios_base::precision technique, where you specify the number of digits you want.
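For example, something along these lines (the value 3.14159 and the three precision settings are assumptions, chosen to match the output quoted below):

#include <iostream>

int main()
{
    double f = 3.14159;

    std::cout.precision(5);                  // at most 5 significant digits
    std::cout << f << "\n";                  // 3.1416

    std::cout.precision(10);                 // up to 10 significant digits
    std::cout << f << "\n";                  // 3.14159 (default format drops trailing zeros)

    std::cout << std::fixed << f << "\n";    // 3.1415900000 (fixed: 10 digits after the point)
    return 0;
}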
The above code will output:
3.1416
3.14159
3.1415900000
I think you are talking about how to print the minimum number of decimal digits that allow you to read the exact same floating point number back. This paper is a good introduction to this tricky problem:
http://grouper.ieee.org/groups/754/email/pdfq3pavhBfih.pdf
The dtoa function looks like David Gay's work; you can find the source here: http://www.netlib.org/fp/dtoa.c (although this is C, not Java).
Gay also wrote a paper about his method. I don't have a link, but it's referenced in the paper above, so you can probably Google it.
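If you have a C++17 compiler whose standard library implements floating-point std::to_chars, that shortest-round-trip behaviour is already available in <charconv>. A minimal sketch (the helper name pretty is mine):

#include <charconv>
#include <iostream>
#include <string>

// Shortest decimal string that reads back as exactly the same double,
// which is what Java's Double.toString aims for (modulo Java's formatting rules).
std::string pretty(double d)
{
    char buf[64];
    auto res = std::to_chars(buf, buf + sizeof buf, d);  // no precision => shortest round trip
    return std::string(buf, res.ptr);
}

int main()
{
    std::cout << pretty(1.2593) << "\n";         // 1.2593
    std::cout << pretty(0.004963) << "\n";       // 0.004963
    std::cout << pretty(1.55558742563) << "\n";  // 1.55558742563
    return 0;
}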