I've often noticed gcc converting multiplications into shifts in the executable. Something similar might happen when multiplying an int and a float. For example, 2 * f might simply increment the exponent of f by 1, saving some cycles. Do compilers, perhaps when asked to (e.g. via -ffast-math), generally do this?

Are compilers generally smart enough to do this, or do I need to do it myself using the scalb*() or ldexp()/frexp() family of functions?
It may be useful for embedded-systems compilers to have a special scale-by-power-of-two pseudo-op, which the code generator could translate in whatever fashion is optimal for the machine in question, since on some embedded processors adjusting the exponent can be an order of magnitude faster than a full power-of-two multiplication. On the embedded micros where multiplication is slowest, however, a compiler could probably achieve a bigger performance boost by having the floating-point-multiply routine check its arguments at run time and skip over the parts of the mantissa that are zero.
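To illustrate what such a scale-by-power-of-two operation might look like in software (my own sketch, not taken from any particular compiler or library), one can add n to the biased exponent field of an IEEE-754 single, assuming the value is a normal, finite number and the exponent neither overflows nor underflows:

    #include <stdint.h>
    #include <string.h>

    /* Sketch only: scale a normal, finite IEEE-754 single by 2^n.
       Assumes the biased exponent stays in range (no overflow, no
       underflow into subnormals). */
    static float scale_pow2(float f, int n)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);  /* reinterpret the float's bits */
        bits += (uint32_t)n << 23;       /* exponent field is bits 23..30 */
        memcpy(&f, &bits, sizeof bits);
        return f;
    }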
Actually, this is what happens in the hardware.

The 2 is also passed into the FPU as a floating-point number, with a mantissa of 1.0 and an exponent of 1 (i.e. 2 = 1.0 × 2^1). For the multiplication, the exponents are added and the mantissas multiplied.

Given that there is dedicated hardware to handle the complex case (multiplying by values that are not powers of two), and the special case is handled no worse than it would be by dedicated circuitry, there is no point in having additional circuitry and instructions.