For this code block:

    int num = 5;
    int denom = 7;
    double d = num / denom;

the value of d is 0.0. It can be forced to work by casting:

    double d = ((double) num) / denom;

But is there another way to get the correct double result? I don't like casting primitives; who knows what may happen.
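To make the behaviour concrete, here is a minimal sketch of the two steps involved: the division is performed in int arithmetic first (truncating toward zero), and only then is the result widened to double.

```java
public class IntDivisionDemo {
    public static void main(String[] args) {
        int num = 5;
        int denom = 7;

        // Integer division truncates toward zero: 5 / 7 == 0
        int quotient = num / denom;

        // The int result 0 is computed first, then widened to 0.0
        double d = num / denom;

        System.out.println(quotient); // prints 0
        System.out.println(d);        // prints 0.0
    }
}
```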
What's wrong with casting primitives? If you don't want to cast for some reason, you could do

    double d = num * 1.0 / denom;
Type Casting Is The Only Way

Producing a double from integer division: there is no other way without casting (maybe you will not do it explicitly, but it will happen). Now, there are several ways we can try to get a precise double value (where num and denom are int type, and of course with casting):

with explicit casting:

    double d = (double) num / denom;
    double d = ((double) num) / denom;
    double d = num / (double) denom;
    double d = (double) num / (double) denom;

but not

    double d = (double) (num / denom);

with implicit casting:

    double d = num * 1.0 / denom;
    double d = num / 1d / denom;
    double d = ( num + 0.0 ) / denom;
    double d = num; d /= denom;

but not

    double d = num / denom * 1.0;

and not

    double d = 0.0 + ( num / denom );
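The working and non-working forms above can be verified with a small program. The key is whether a double operand is involved before the division happens; in the failing forms, num / denom is evaluated first in int arithmetic, so the fraction is already lost.

```java
public class DivisionForms {
    public static void main(String[] args) {
        int num = 5;
        int denom = 7;

        // Explicit cast applied before the division promotes it to double
        System.out.println((double) num / denom);   // ~0.7142857142857143
        System.out.println(num / (double) denom);   // ~0.7142857142857143

        // Cast applied after the division is too late: 5 / 7 is already 0
        System.out.println((double) (num / denom)); // prints 0.0

        // Implicit promotion: a double operand appears before the division
        System.out.println(num * 1.0 / denom);      // ~0.7142857142857143
        System.out.println(num / 1d / denom);       // ~0.7142857142857143

        // The int division happens first here, so the fraction is lost
        System.out.println(num / denom * 1.0);      // prints 0.0
        System.out.println(0.0 + (num / denom));    // prints 0.0
    }
}
```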
You might consider wrapping the operations. For example:
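A minimal sketch of such a wrapper. The names Utils and divide come from the Utils::divide reference below; the body shown here is an assumption about what the wrapper would contain (any of the working casting tricks would do).

```java
public final class Utils {
    private Utils() {} // static utility class, not meant to be instantiated

    // Centralizes the cast so callers never divide two ints directly.
    // Assumed implementation: explicit cast of the numerator.
    public static double divide(int num, int denom) {
        return (double) num / denom;
    }
}
```

It can then be called directly, or passed as a method reference (Utils::divide) wherever an (int, int) -> double function is expected.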
This allows you to look up (just once) whether the cast does exactly what you want. This method could also be subject to tests, to ensure that it continues to do what you want. It also doesn't matter which trick you use to cause the division (you could use any of the answers here), as long as it produces the correct result. Anywhere you need to divide two integers, you can now just call Utils::divide and trust that it does the right thing.

Use something like:

    double d = 1d * num / denom;

(1d is a cast to double)