Lossless conversion of float to string and back: is it possible?

Posted 2019-05-07 10:36

This question refers to the IEEE standard floating point numbers used on C/x86.

Is it possible to represent any numeric (i.e. excluding special values such as NaN) float or double as a decimal string such that converting that string back to a float/double will always yield exactly the original number?

If not, what algorithm tells me whether a given number will suffer a conversion error?

If so, consider this: some decimal fractions, when converted to binary, are not numerically identical to the original decimal value. The reverse, however, is never true: because a binary float has bounded precision, every one has a finite, exact decimal expansion. So here's another question...

Is it ever necessary to introduce deliberate errors into the decimal representation in order to trick the atof (or other) function into yielding the exact original number, or will a naive, non-truncating toString function be adequate (assuming exact conversion is possible in general)?

3 Answers

戒情不戒烟 · 2019-05-07 11:08

Given that the IEEE format can represent only a finite number of (binary) digits, and therefore has bounded precision (cf. machine epsilon), you only need a finite number of (decimal) digits. Of course, it is preferable if the implementation (strtod, snprintf) behaves as an identity mapping between {all floats} and the set of {one chosen decimal representation for each float}.

Fickle 薄情 · 2019-05-07 11:14

In Java, you can convert a double to a string and back by constructing an intermediate BigDecimal object:

double doubleValue = ...;

// From double to string
String valueOfDoubleAsString = new BigDecimal(doubleValue).toString();
// And back
double doubleValueFromString = new BigDecimal(valueOfDoubleAsString).doubleValue();

// doubleValue == doubleValueFromString

There is no locale issue with this method, and it is lossless because the BigDecimal(double) constructor captures the exact binary value (the resulting string can be quite long, though). However, special double values (Infinity, NaN) will of course not work.

2019-05-07 11:21

According to this page:

Actually, the IEEE754-1985 standard says that 17 decimal digits is enough in all cases. However, it seems that the standard is a little vague on whether conforming implementations must guarantee lossless conversion when 17 digits are used.

So storing a double as a decimal string with at least 17 digits (correctly rounded) will guarantee that it can be converted back to binary double without any data loss.

In other words, if every possible double-precision value were converted to a correctly rounded decimal string of 17 significant digits, each would map to a distinct string. Thus there is no data loss.


For single precision the corresponding minimum is 9 digits (C11 exposes these cut-offs as FLT_DECIMAL_DIG = 9 and DBL_DECIMAL_DIG = 17 in float.h).
