Please consider the following code and comments:
Console.WriteLine(1 / 0); // will not compile, error: Division by constant zero
int i = 0;
Console.WriteLine(1 / i); // compiles, runs, throws: DivideByZeroException
double d = 0;
Console.WriteLine(1 / d); // compiles, runs, results in: Infinity
I can understand the compiler actively checking for division by a constant zero, and the DivideByZeroException at runtime, but:
Why would using a double in a divide-by-zero return Infinity rather than throwing an exception? Is this by design or is it a bug?
Just for kicks, I did this in VB.NET as well, with "more consistent" results:
dim d as double = 0.0
Console.WriteLine(1 / d) ' compiles, runs, results in: Infinity
dim i as Integer = 0
Console.WriteLine(1 / i) ' compiles, runs, results in: Infinity
Console.WriteLine(1 / 0) ' compiles, runs, results in: Infinity
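For comparison, a minimal C# sketch (my own, not from the original post) of why the two languages differ: C#'s / keeps integer semantics when both operands are ints, while VB's / widens both operands to Double first.

int i = 0;
// C#'s / does integer division for two ints, so 1 / i throws here;
// the closest C# equivalent of VB's "1 / i" forces a double operand:
Console.WriteLine(1 / (double)i); // compiles, runs, results in: Infinity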
EDIT:
Based on kekekela's feedback, I ran the following, which resulted in Infinity:
Console.WriteLine(1 / .0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001);
This test seems to corroborate the idea that a literal double of 0.0
is actually a very, very tiny fraction which will result in Infinity...
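The same effect can be reproduced without the enormous literal. A minimal C# sketch (the specific values are my own illustration): dividing by the smallest positive double overflows to Infinity, and halving that value underflows to exactly 0.0, whose division yields Infinity by definition.

// double.Epsilon is the smallest positive double, about 4.94e-324.
// 1 / double.Epsilon is about 2e323, well past double.MaxValue (about 1.8e308),
// so the division overflows to Infinity.
Console.WriteLine(1 / double.Epsilon); // Infinity
// Halving the smallest subnormal underflows to exactly 0.0, and dividing
// by that zero also yields Infinity, per IEEE 754.
double underflowed = double.Epsilon / 2;
Console.WriteLine(underflowed == 0.0); // True
Console.WriteLine(1 / underflowed);    // Infinity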
A double is a floating point number and not an exact value, so what you are really dividing by from the compiler's viewpoint is something approaching zero, but not exactly zero.
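To illustrate the broader point about inexactness, here is a standard example (not from the original thread): most decimal fractions have no exact binary representation, though 0.0 itself does.

// 0.1 and 0.2 have no exact binary representation, so their sum
// is not exactly 0.3.
double sum = 0.1 + 0.2;
Console.WriteLine(sum == 0.3); // False
Console.WriteLine(sum - 0.3);  // about 5.55E-17
// 0.0 itself, however, is exactly representable.
Console.WriteLine(0.0 == 0);   // True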
Because the "numeric" floating point is nothing of the kind. Floating point operations:
(see http://www.cs.uiuc.edu/class/fa07/cs498mjg/notes/floating-point.pdf for some examples)
The floating point is a construct to solve a specific problem, and gets used all over when it shouldn't be. I think they're pretty awful, but that is subjective.
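As a concrete illustration of the first property above, a small C# sketch (values chosen by me): addition order changes the result because rounding happens at each intermediate step.

double big = 1e16; // the gap between adjacent doubles here is 2
// Adding 1 twice is lost to rounding at each step, while adding 2 once
// lands exactly on a representable value.
double left = (big + 1) + 1;      // 1E+16
double right = big + (1 + 1);     // 1.0000000000000002E+16
Console.WriteLine(left == right); // False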
This likely has something to do with the fact that IEEE-standard floating point numbers, single and double precision alike, have a specified "infinity" value. .NET is just exposing something that already exists at the hardware level.
See kekekela's answer for why this makes sense, logically.
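A short C# sketch of what "exposing the hardware behavior" looks like; these are all standard System.Double members.

double d = 0;
// IEEE 754 defines signed infinities, and .NET surfaces them directly
// as Double.PositiveInfinity and Double.NegativeInfinity.
Console.WriteLine(1 / d == double.PositiveInfinity);  // True
Console.WriteLine(-1 / d == double.NegativeInfinity); // True
Console.WriteLine(double.IsInfinity(1 / d));          // True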
This is by design because the double type complies with IEEE 754, the standard for floating-point arithmetic. Check out the documentation for Double.NegativeInfinity and Double.PositiveInfinity.

In a nutshell: the double type defines a value for infinity while the int type doesn't. So in the double case, the result of the calculation is a value you can actually express in the given type, since it's defined. In the int case, there is no value for infinity and thus no way to return an accurate result. Hence the exception.

VB.NET does things a little bit differently: its / operator always performs floating-point division, even when both operands are integers. This is to allow developers to write, e.g., the expression 1 / 2 and have it evaluate to 0.5, which some would consider intuitive. If you want to see behavior consistent with C#, try this:

Console.WriteLine(1 \ 0) ' integer division

Note the use of the integer division operator (\, not /) above. I believe you'll get an exception (or a compile error -- not sure which).

Similarly, try this:

Dim x = 1 / 0
Console.WriteLine(x.GetType())

The above code will output System.Double.

As for the point about imprecision, here's another way of looking at it. It isn't that the double type has no value for exactly zero (it does); rather, the double type is not meant to provide mathematically exact results in the first place. (Certain values can be represented exactly, yes, but calculations give no promise of accuracy.) After all, the value of the mathematical expression 1 / 0 is not defined (last I checked), but 1 / x approaches infinity as x approaches zero. So from this perspective, if we cannot represent most fractions n / m exactly anyway, it makes sense to treat the x / 0 case as approximate and give the value it approaches -- again, infinity is at least defined.
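To put the behaviors side by side, a minimal C# sketch (my own summary, not from the answer): IEEE 754 also defines NaN for 0 / 0, while the int path has nothing valid to return and throws.

double zero = 0.0;
Console.WriteLine(1 / zero);  // Infinity
Console.WriteLine(-1 / zero); // -Infinity
Console.WriteLine(0 / zero);  // NaN: 0 / 0 approaches no single value
int intZero = 0;
try
{
    Console.WriteLine(1 / intZero);
}
catch (DivideByZeroException e)
{
    // int has no representation for infinity, so the CLR has nothing
    // accurate to return and throws instead.
    Console.WriteLine(e.Message); // Attempted to divide by zero.
}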