I think this is not possible, because an Int32 has 1 sign bit and 31 bits of numeric information, while an Int16 has 1 sign bit and 15 bits of numeric information; two Int16 together would therefore have 2 sign bits but only 30 bits of information. If this is true, then I cannot fit one Int32 into two Int16. Is this true?
Thanks in advance.
EXTRA INFORMATION: I'm using VB.NET, but I think I can translate a C# answer without problems.
What I initially wanted to do was convert one UInt32 into two UInt16, as this is for a library that interacts with WORD-based machines. Then I realized that UInt32 is not CLS-compliant, and tried to do the same with Int32 and Int16.
EVEN WORSE: Doing a = CType(c And &HFFFF, Int16) throws an OverflowException. I expected that statement to behave the same as the C# a = (Int16)(c & 0xffff); (which does not throw an exception).
You might also be interested in StructLayout, or in unions if you're using C++.
If you look at the bit representation, then you are correct.
You can do this with unsigned ints though, as they don't have the sign bit.
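For example, a sketch of the round trip with unsigned types (the concrete value 0xDEADBEEF is just an illustration):

```csharp
using System;

uint value = 0xDEADBEEF;
ushort high = (ushort)(value >> 16);     // 0xDEAD
ushort low  = (ushort)(value & 0xFFFF);  // 0xBEEF
uint restored = ((uint)high << 16) | low;
Console.WriteLine(restored == value);    // True
```

All 32 bits land in the two halves, so nothing is lost on the way back.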
Why not? Let's reduce the number of bits for the sake of simplicity: say we have 8 bits, of which the leftmost is a minus bit. You can store it in 2 times 4 bits; I don't see why it wouldn't be possible, since in both cases you have 8 bits of information.
EDIT: For the sake of simplicity, I not only reduced the number of bits but also didn't use the two's-complement method. In my examples, the left bit denotes minus; the rest is to be interpreted as a normal positive binary number.
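A sketch of that reduced example in C# (the concrete bit pattern is just an illustration): an 8-bit value splits into two 4-bit halves and reassembles exactly, sign bit included, because the sign bit is just another bit.

```csharp
using System;

byte b = 0b1010_1101;                  // left bit set: "minus" in this answer's convention
byte high = (byte)(b >> 4);            // 0b1010
byte low  = (byte)(b & 0xF);           // 0b1101
byte back = (byte)((high << 4) | low); // 0b1010_1101 again
Console.WriteLine(back == b);          // True
```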
You can use StructLayout to do this. Using it, you can get the full value as an int and the low and high parts as shorts. For example, with Int32 num = 70000, something like:
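A sketch of such a struct (the type and field names here are my own; the FieldOffset assignments assume a little-endian machine, which is the common case for .NET):

```csharp
using System;
using System.Runtime.InteropServices;

var parts = new Int32Parts { Value = 70000 }; // 70000 = 0x00011170
Console.WriteLine(parts.Low);                 // 4464 (0x1170)
Console.WriteLine(parts.High);                // 1    (0x0001)

// Explicit layout overlays the fields on the same bytes, like a C union.
[StructLayout(LayoutKind.Explicit)]
struct Int32Parts
{
    [FieldOffset(0)] public int Value;
    [FieldOffset(0)] public short Low;   // bytes 0-1 (low word on little-endian)
    [FieldOffset(2)] public short High;  // bytes 2-3 (high word on little-endian)
}
```

Writing Value and reading Low/High (or vice versa) reinterprets the same 4 bytes, so no conversion and no OverflowException are involved.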
Unsafe code in C#: no overflow occurs, and it detects endianness automatically:
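A sketch of what that could look like (the method name Split is my own; compiling requires the /unsafe option):

```csharp
using System;

class UnsafeSplit
{
    // Reinterprets the Int32's bytes as two Int16s; no numeric conversion
    // takes place, so no OverflowException can occur.
    static unsafe void Split(int value, out short low, out short high)
    {
        short* p = (short*)&value;
        if (BitConverter.IsLittleEndian)
        {
            low = p[0];   // first word holds the low 16 bits
            high = p[1];
        }
        else
        {
            low = p[1];   // big-endian: word order is reversed
            high = p[0];
        }
    }

    static void Main()
    {
        Split(70000, out short low, out short high); // 70000 = 0x00011170
        Console.WriteLine($"{low} {high}");          // 4464 1
    }
}
```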