I want to implement the equivalent of C's uint-to-double cast in the GHC Haskell compiler. We already implement int-to-double using FILD or CVTSI2SD. Are there unsigned versions of these operations, or am I supposed to zero out the highest bit of the uint before the conversion (thus losing range)?
There is a better way
You can exploit some of the properties of the IEEE double format and interpret the unsigned value as part of the mantissa, while adding some carefully crafted exponent.
The 1075 comes from the IEEE exponent bias (1023) for doubles plus a "shift" amount of 52 bits for your mantissa. Note that there is an implicit leading "1" in the mantissa, which needs to be subtracted later.
So:
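The answer's original code did not survive the page extraction; a minimal sketch of the trick described above, assuming a 32-bit unsigned input (the constant and function names are mine):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Convert a 32-bit unsigned integer to double by placing it in the
 * low bits of a double's mantissa.  The high bits 0x43300000 encode
 * an exponent field of 1075, i.e. an exponent of 1075 - 1023 = 52,
 * so the bit pattern represents 2^52 + x; subtracting 2^52 removes
 * the implicit leading 1 and leaves exactly x. */
double u32_to_double(uint32_t x)
{
    uint64_t bits = 0x4330000000000000ULL | x; /* exponent 1075, mantissa x */
    double d;
    memcpy(&d, &bits, sizeof d);               /* reinterpret bits as double */
    return d - 4503599627370496.0;             /* subtract 2^52 */
}
```

Because a 32-bit value always fits in the 52-bit mantissa, the subtraction is exact and no range is lost.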
If you don't have native 64-bit support on your platform, a version using SSE for the integer steps might be beneficial, but that depends, of course.
On my platform this compiles to code that looks pretty good. The `0x0(%rip)` operand is the magic double constant, and if inlined, some instructions, like the upper-32-bit zeroing and the constant reload, will vanish.

If I'm understanding you correctly, you should be able to move your 32-bit uint to a temporary area on the stack, zero out the next dword, then use `fild qword ptr` to load the now 64-bit unsigned integer as a double.
If you specifically want to use the x87 FILD opcode, just shift the uint64 down to a uint63 (divide by 2), then multiply it by 2 again, but already as a double; this x87 uint64-to-double conversion costs one extra FMUL.
The example: 0xFFFFFFFFFFFFFFFFU -> +1.8446744073709551e+0019
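As a hedged C sketch of the shift-then-multiply idea (function name is mine; note that the lowest bit is discarded, which the divide-by-2 described above also implies):

```c
#include <assert.h>
#include <stdint.h>

/* Halve the value so it fits the signed 63-bit range that FILD
 * handles, convert, then scale back up with one multiply.  The
 * lowest bit is lost, so odd inputs convert as if rounded down
 * by one before the conversion. */
double u64_to_double_shift(uint64_t x)
{
    return (double)(int64_t)(x >> 1) * 2.0;
}
```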
I was unable to post the code example because of the strict formatting rules. I'll try later.
VC produced the x86 output I posted (I probably removed all the incorrect ASCII characters from the text file manually).
As someone said, "Good Artists Copy; Great Artists Steal". So we can just check how other compiler writers solved this issue. I used a simple snippet:
(volatiles added to ensure the compiler does not optimize out the conversions)
Results (irrelevant instructions skipped):
Visual C++ 2010 cl /Ox (x86)
So basically the compiler is adding an adjustment value in case the sign bit was set.
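Expressed in C (my rendering of the pattern, not the compiler's literal output), the adjustment looks like this:

```c
#include <assert.h>
#include <stdint.h>

/* Convert via the signed instruction, then compensate: if the sign
 * bit was set, the signed conversion produced a result that is
 * exactly 2^64 too low. */
double u64_to_double_adjust(uint64_t x)
{
    double d = (double)(int64_t)x;
    if ((int64_t)x < 0)
        d += 18446744073709551616.0;  /* add 2^64 */
    return d;
}
```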
Visual C++ 2010 cl /Ox (x64)
No need to adjust here because the compiler knows that rax will have the sign bit cleared.

Visual C++ 2012 cl /Ox
This uses branchless code to add 0 or the magic adjustment depending on whether the sign bit was cleared or set.
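A branchless C equivalent of that pattern (a sketch; the table and function names are mine):

```c
#include <assert.h>
#include <stdint.h>

/* Select the adjustment by indexing with the sign bit instead of
 * branching: adj[0] (0.0) when the sign bit is clear, adj[1] (2^64)
 * when it is set. */
double u64_to_double_branchless(uint64_t x)
{
    static const double adj[2] = { 0.0, 18446744073709551616.0 };
    return (double)(int64_t)x + adj[x >> 63];
}
```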