I am trying to write a simple log base 2 method. I understand that representing values like std::log(8.0) and std::log(2.0) exactly on a computer is difficult. I also understand that std::log(8.0) / std::log(2.0) may result in a value very slightly lower than 3.0. What I do not understand is why storing the result of the calculation below in a double lvalue and then casting it to an unsigned int would change the result compared to casting the formula directly.

The following code shows my test case, which repeatedly fails on my 32-bit Debian Wheezy machine but passes repeatedly on my 64-bit Debian Wheezy machine.
#include <cmath>
#include <cassert>

int main() {
    int n = 8;
    // Cast the expression to unsigned int directly.
    unsigned int i = static_cast<unsigned int>(
        std::log(static_cast<double>(n)) / std::log(static_cast<double>(2)));
    // Store the same expression in a double first, then cast.
    double d =
        std::log(static_cast<double>(n)) / std::log(static_cast<double>(2));
    unsigned int j = static_cast<unsigned int>(d);
    assert(i == j);  // fails on the 32-bit machine, passes on the 64-bit one
}
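To make the difference easier to observe, here is a minimal sketch (not part of my original test) that prints the intermediate value with enough digits to round-trip a double; presumably the printed value, or the truncated result, differs between the two machines:

#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
    double d = std::log(8.0) / std::log(2.0);
    // 17 significant digits are enough to round-trip a double, so this
    // shows whether d is exactly 3 or something like 2.9999999999999996.
    std::cout << std::setprecision(17) << d << " -> "
              << static_cast<unsigned int>(d) << '\n';
}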
I also know I can use bit shifting to come up with my result in a more predictable way (a sketch of what I mean is below). I am mostly curious why casting the double that results from the operation is any different from storing that value in a double on the stack and casting the stored double.
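For reference, this is roughly the bit-shifting version I have in mind; it sidesteps floating point entirely, at the cost of only computing floor(log2(n)) for positive integers (the name ilog2 is mine, not a standard function):

#include <cassert>

// floor(log2(n)) by counting how many times n can be halved.
// Assumes n > 0; for powers of two the result is exact.
unsigned int ilog2(unsigned int n) {
    unsigned int r = 0;
    while (n >>= 1)  // shift right until n reaches 0
        ++r;
    return r;
}

int main() {
    assert(ilog2(8) == 3);
}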