I'm trying to understand the difference between some data types and how conversion between them works.
public static void ExplicitTypeConversion2()
{
long longValue = long.MaxValue;
float floatValue = float.MaxValue;
int integerValue = (int) longValue;
int integerValue2 = (int)floatValue;
Console.WriteLine(integerValue);
Console.WriteLine(integerValue2);
}
When I run that code block, it outputs:
-1
-2147483648
I know that if the value you want to assign to an int is bigger than an int can hold, it returns the minimum value of int (-2147483648).
As far as I know, long.MaxValue is much bigger than the maximum value of an int, but if I cast long.MaxValue to int, it returns -1.
What is the difference between these two casts? I think the first one is also supposed to return -2147483648 instead of -1.
The binary value of long.MaxValue is 0111...111111 (a zero followed by 63 ones). When you cast to int, you keep the lowest 32 bits, 111...11111. That is -1 in decimal, since int is signed and two's complement applies.
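Here is a minimal sketch showing that the cast really does keep only the low 32 bits (the explicit mask is just for illustration; it isn't needed in real code):

long longValue = long.MaxValue;                         // 0x7FFFFFFFFFFFFFFF

int truncated = unchecked((int)longValue);              // the cast discards the high 32 bits
int masked = unchecked((int)(longValue & 0xFFFFFFFF));  // same thing, done by hand

Console.WriteLine(truncated);                           // -1
Console.WriteLine(masked);                              // -1
Console.WriteLine(Convert.ToString(truncated, 2));      // 32 ones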
Let me explain:
The maximum value of long is 9,223,372,036,854,775,807, or 0x7FFFFFFFFFFFFFFF. Truncating that to its lowest 32 bits leaves 0xFFFFFFFF; taking the two's complement of 0xFFFFFFFF gives 0x00000001, so with the sign bit set it represents -1 in decimal.
On the other side, the maximum value of float is 3.40282347E+38, which is far outside the range of int, so there are no 32 bits that can simply be kept; the conversion instead produces the hex value 0x80000000, which with the sign bit set is -2147483648 in decimal.
All of this applies to signed integers; the result is different for unsigned ones.
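As a quick sketch of the unsigned case: the same 32 bits, read without a sign bit, give a large positive number instead:

// Same low 32 bits (0xFFFFFFFF), but uint has no sign bit:
uint unsignedValue = unchecked((uint)long.MaxValue);
Console.WriteLine(unsignedValue);   // 4294967295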
Reference:
https://msdn.microsoft.com/en-us/library/system.int64.maxvalue(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/system.single.maxvalue(v=vs.110).aspx
That's not a rule. The relevant rules are:

For integer types in an unchecked context (ie the default): if the source type is larger than the destination type, the source value is truncated by discarding its "extra" most significant bits.

For float->int in an unchecked context: the value is rounded towards zero; if the result is within the range of int, that is the result of the conversion, otherwise the result is an unspecified value of the destination type.

Chopping off the 32 leading bits of 0x7fffffffffffffff gives 0xffffffff, aka -1.

You were never promised you would get int.MinValue for that out-of-range float->int cast, but you get it anyway because it's easy to implement: x64's conversion instruction cvtss2si produces 0x80000000 for out-of-range results, and similarly fistp (the old x87 conversion instruction used by the 32-bit JIT) stores "the integer indefinite value", which is also 0x80000000.
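A short sketch contrasting the two contexts; the unchecked float result is what a typical x64 runtime produces, not something the language guarantees:

long longValue = long.MaxValue;
float floatValue = float.MaxValue;

// Unchecked (the default): truncation for integers, unspecified for the float.
Console.WriteLine(unchecked((int)longValue));   // -1
Console.WriteLine(unchecked((int)floatValue));  // -2147483648 on x64, by accident of cvtss2si

// Checked: both out-of-range conversions throw instead.
try { Console.WriteLine(checked((int)longValue)); }
catch (OverflowException) { Console.WriteLine("long -> int overflowed"); }

try { Console.WriteLine(checked((int)floatValue)); }
catch (OverflowException) { Console.WriteLine("float -> int overflowed"); }

If you care about catching the out-of-range case, a checked context (or Convert.ToInt32) turns it into an OverflowException instead of an architecture-dependent value.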