int y = -2147483648;
int z = unchecked(y / -1);
The second line causes an OverflowException. Shouldn't unchecked prevent this?
For example:
int y = -2147483648;
int z = unchecked(y * 2);
doesn't cause an exception.
Section 7.7.2 (Division operator) of the C# 4 spec states:

"If the left operand is the smallest representable int or long value and the right operand is –1, an overflow occurs. In a checked context, this causes a System.ArithmeticException (or a subclass thereof) to be thrown. In an unchecked context, it is implementation-defined as to whether a System.ArithmeticException (or a subclass thereof) is thrown or the overflow goes unreported with the resulting value being that of the left operand."
So the fact that this throws an exception in an unchecked context is not in fact a bug, since the behavior is implementation-defined.
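If you actually want the wrapped result rather than the exception, one way to get it is to do the division in 64-bit space, where the quotient is representable, and only narrow back to int in an unchecked context. A minimal sketch (the widening trick is my suggestion, not something the spec prescribes):

using System;

int y = int.MinValue;

// (long)int.MinValue / -1 == 2147483648L, which fits comfortably in a
// long, so the division itself cannot overflow.
long wide = (long)y / -1;

// Narrowing back in an unchecked context wraps 2147483648 around to
// -2147483648, the usual two's-complement result.
int z = unchecked((int)wide);

Console.WriteLine(z); // -2147483648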
According to section 7.8.2 of the C# Language Specification 5.0 we have the following case:

"If the left operand is the smallest representable int or long value and the right operand is –1, an overflow occurs. In a checked context, this causes a System.ArithmeticException (or a subclass thereof) to be thrown. In an unchecked context, it is implementation-defined as to whether a System.ArithmeticException (or a subclass thereof) is thrown or the overflow goes unreported with the resulting value being that of the left operand."
This is not an exception that the C# compiler or the jitter has any control over. It is specific to Intel/AMD processors: the CPU generates a #DE trap (Divide Error) when the IDIV instruction fails. The operating system handles the processor trap and reflects it back into the process with a STATUS_INTEGER_OVERFLOW exception. The CLR dutifully translates it to the matching managed exception.
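A minimal repro of that chain (assuming an Intel/AMD machine and a runtime that translates the trap as described) is to catch the managed exception the CLR raises:

using System;

int y = int.MinValue;
try
{
    // IDIV traps with #DE here: the true quotient +2147483648 does not
    // fit in a 32-bit signed register. unchecked cannot suppress a
    // hardware trap, so the CLR still surfaces a managed exception.
    int z = unchecked(y / -1);
    Console.WriteLine(z);
}
catch (OverflowException ex)
{
    // Typically "Arithmetic operation resulted in an overflow."
    Console.WriteLine(ex.Message);
}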
The Intel Processor Manual is not exactly a gold mine of information about it:

"Non-integral results are truncated (chopped) towards 0. The remainder is always less than the divisor in magnitude. Overflow is indicated with the #DE (divide error) exception rather than with the CF flag."
In English: the result of the signed division is +2147483648, not representable in an int since it is Int32.MaxValue + 1. This is otherwise an inevitable side-effect of the way the processor represents negative values: it uses two's-complement encoding, which produces a single value to represent 0, leaving an odd number of remaining encodings to represent the negative and positive values. There is one more for negative values. It is the same kind of overflow as -Int32.MinValue, except that the processor doesn't trap on the NEG instruction and just produces a garbage result.

The C# language is of course not the only one with this problem. The C# Language Spec makes it implementation-defined behavior (chapter 7.8.2) by noting the special behavior. There was no other reasonable thing they could do with it; generating code to handle the exception was surely considered too impractical, producing undiagnosably slow code. Not the C# way.
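To see the NEG behavior he mentions, a quick sketch: negating int.MinValue in an unchecked context simply wraps back to int.MinValue (a checked context would throw instead):

using System;

int n = int.MinValue;

// Two's complement has no positive counterpart for int.MinValue, so the
// NEG instruction wraps: unchecked(-n) yields int.MinValue again.
// checked(-n) would throw an OverflowException instead.
int neg = unchecked(-n);

Console.WriteLine(neg); // -2147483648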
The C and C++ language specs up the ante by making it undefined behavior. That can truly get ugly, as in a program compiled with the gcc or g++ compiler, typically with the MinGW toolchain. It has imperfect runtime support for SEH: it swallows the exception and allows the processor to restart the division instruction. The program hangs, burning 100% of a core with the processor constantly generating #DE traps. Turning division into the legendary Halt and Catch Fire instruction :)