Weird result after assigning 2^31 to a signed and unsigned 32-bit integer

Published 2020-02-07 02:08

Question:

As the question title reads, assigning 2^31 to a signed and unsigned 32-bit integer variable gives an unexpected result.

Here is the short program (in C++), which I made to see what's going on:

#include <cstdio>
using namespace std;

int main()
{
    unsigned long long n = 1<<31;
    long long n2 = 1<<31;  // this works as expected
    printf("%llu\n",n);
    printf("%lld\n",n2);
    printf("size of ULL: %zu, size of LL: %zu\n", sizeof(unsigned long long), sizeof(long long) );
    return 0;
}

Here's the output:

MyPC / # c++ test.cpp -o test
MyPC / # ./test
18446744071562067968      <- Should be 2^31 right?
-2147483648               <- This is correct ( -2^31 because of the sign bit)
size of ULL: 8, size of LL: 8

I then added another function p(), to it:

void p()
{
  unsigned long long n = 1<<32;  // since n is 8 bytes, this should be legal for any integer from 32 to 63
  printf("%llu\n",n);
}

On compiling and running, this is what confused me even more:

MyPC / # c++ test.cpp -o test
test.cpp: In function ‘void p()’:
test.cpp:6:28: warning: left shift count >= width of type [enabled by default]
MyPC / # ./test 
0
MyPC /

Why should the compiler complain about left shift count being too large? sizeof(unsigned long long) returns 8, so doesn't that mean 2^63-1 is the max value for that data type?

It struck me that maybe n*2 and n<<1 don't always behave in the same manner, so I tried this:

void s()
{
   unsigned long long n = 1;
   for(int a=0;a<63;a++) n = n*2;
   printf("%llu\n",n);
}

This gives the correct value of 2^63, which is 9223372036854775808 (I verified it using Python). But what is wrong with doing a left shift?

A left arithmetic shift by n is equivalent to multiplying by 2^n (provided the value does not overflow)

-- Wikipedia

The value is not overflowing; only a minus sign would appear, since the value is 2^63 (the sign bit is set).

I'm still unable to figure out what's going on with left shift, can anyone please explain this?

PS: This program was run on a 32-bit system running Linux Mint (if that helps)

Answer 1:

On this line:

unsigned long long n = 1<<32;

The problem is that the literal 1 is of type int, which is probably only 32 bits. Shifting it by 32 therefore exceeds the width of its type.

Just because you're storing into a larger datatype doesn't mean that everything in the expression is done at that larger size.

So to correct it, you need to either cast it up or make it an unsigned long long literal:

unsigned long long n = (unsigned long long)1 << 32;
unsigned long long n = 1ULL << 32;


Answer 2:

The reason 1 << 32 fails is because 1 doesn't have the right type (it is int). The compiler doesn't perform any converting magic before the assignment itself actually happens, so 1 << 32 gets evaluated using int arithmetic, giving a warning about the shift count being at least the width of the type.

Try using 1LL or 1ULL instead which respectively have the long long and unsigned long long type.



Answer 3:

The line

unsigned long long n = 1<<32;

is invalid, because the literal 1 is of type int, so 1 << 32 is also evaluated as an int, which is 32 bits in most cases; shifting by the full width of the type is undefined.

The line

unsigned long long n = 1<<31;

also overflows, for a similar reason. Note that 1 is of type signed int, so it really only has 31 bits for the value and 1 bit for the sign. So when you shift 1 << 31, the result lands in the sign bit, typically giving -2147483648, which is then sign-extended when converted to an unsigned long long, giving 18446744071562067968. You can verify this in the debugger, if you inspect the variables and convert them.

So use

unsigned long long n = 1ULL << 31;