Why are C++ int and long types both 4 bytes?

Posted 2019-01-14 02:58

Question:

Many sources, including Microsoft, reference both the int and long type as being 4 bytes and having a range of (signed) -2,147,483,648 to 2,147,483,647. What is the point of having a long primitive type if it doesn't actually provide a larger range of values?

Answer 1:

The only things guaranteed about integer types are:

  1. sizeof(char) == 1
  2. sizeof(char) <= sizeof(short)
  3. sizeof(short) <= sizeof(int)
  4. sizeof(int) <= sizeof(long)
  5. sizeof(long) <= sizeof(long long)
  6. sizeof(char) * CHAR_BIT >= 8
  7. sizeof(short) * CHAR_BIT >= 16
  8. sizeof(int) * CHAR_BIT >= 16
  9. sizeof(long) * CHAR_BIT >= 32
  10. sizeof(long long) * CHAR_BIT >= 64

Everything else is implementation-defined. Thanks to (4), long and int can have the same size, but thanks to (9) that common size must be at least 32 bits.
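
A minimal sketch (assuming C++11 for static_assert and CHAR_BIT from <climits>) that turns those guarantees into compile-time checks; it should compile on any conforming implementation:

    // Compile-time checks of the ordering and minimum-width guarantees above.
    #include <climits>

    static_assert(sizeof(char) == 1, "char is the unit of sizeof");
    static_assert(sizeof(char) <= sizeof(short), "char <= short");
    static_assert(sizeof(short) <= sizeof(int), "short <= int");
    static_assert(sizeof(int) <= sizeof(long), "int <= long");
    static_assert(sizeof(long) <= sizeof(long long), "long <= long long");
    static_assert(CHAR_BIT >= 8, "a byte has at least 8 bits");
    static_assert(sizeof(short) * CHAR_BIT >= 16, "short is at least 16 bits");
    static_assert(sizeof(int) * CHAR_BIT >= 16, "int is at least 16 bits");
    static_assert(sizeof(long) * CHAR_BIT >= 32, "long is at least 32 bits");
    static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long is at least 64 bits");

    int main() {}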



Answer 2:

The C++ standard only specifies that long is at least as big as int, so there is nothing wrong with a scenario where they are exactly the same size: it is entirely implementation-defined. The sizes do differ across platforms; for example, on my Linux machine int is 4 bytes and long is 8 bytes.
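
A quick way to see what your own toolchain picked (output is platform-specific; typically 4 and 8 on a 64-bit Linux/LP64 setup, 4 and 4 on 64-bit Windows/LLP64):

    #include <iostream>

    int main() {
        // Both values are implementation-defined; only the minimums are guaranteed.
        std::cout << "int:  " << sizeof(int)  << " bytes\n"
                  << "long: " << sizeof(long) << " bytes\n";
    }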



Answer 3:

As others have pointed out, the assumption underlying the question is only partially true, i.e. it doesn't hold on some platforms. If you really want to understand how we arrived at the current situation, The Long Road to 64 Bits by John Mashey gives a good view of the various forces at play and how they interacted.

Quick summary: C started with char (8 bits) and int (16 bits). Then short (16 bits) and long (32 bits) were added, while int could be 16 or 32 bits depending on what was natural on the platform and on the pressure of backward compatibility. When 64 bits arrived, long long was added as a 64-bit type, and there were some adjustments to the smaller types on 64-bit platforms. Things stabilized with a 32-bit int, but long kept varying definitions: some 64-bit platforms have a 64-bit long, while others (notably Windows) kept long at 32 bits, probably due to different backward-compatibility pressures (Unix had a long history of assuming long is the same size as a pointer; Windows had more remnants in its API from the time when int was 16 bits and long was therefore the only 32-bit type). BTW, typedefs (intXX_t, intptr_t) were added to make the intent clearer, at the risk of the intXX_t family forcing a fixed size where none is really needed.
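
For reference, those typedefs live in <cstdint> (C99 / C++11); a small illustration of the intent each one expresses (the variable names are just placeholders):

    #include <cstdint>

    int main() {
        std::int32_t exact = 0;        // exactly 32 bits; optional, but present on mainstream platforms
        std::int_least32_t least = 0;  // smallest type with at least 32 bits (always available)
        std::int_fast32_t fast = 0;    // "fastest" type with at least 32 bits (always available)
        std::intptr_t ptr_sized = 0;   // wide enough to round-trip a pointer; also optional
        (void)exact; (void)least; (void)fast; (void)ptr_sized;
    }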



Answer 4:

No, it's only 4 bytes on certain platforms. The C++ standard leaves the size implementation-defined.

On other platforms it may be a different size.



Answer 5:

It doesn't necessarily have to be larger. GCC, for example, usually defines long as 8 bytes on the machines where I've used it. The standard's wording only says that these types need to be at least a certain size (for an example, see the finally-standardized long long in C++11).

Essentially, anyone's free to do what they want as long as it meets the requirements. According to the standard, someone could make long long 256 bits, and it'd be perfectly legal.
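
A minimal sketch of what "at least" means in practice, using std::numeric_limits (C++11 for static_assert): the checks pass whether long long is 64 bits or, hypothetically, 256.

    #include <limits>

    // long long must provide at least 63 value bits plus a sign; a wider
    // implementation (e.g. a hypothetical 256-bit long long) would still pass.
    static_assert(std::numeric_limits<long long>::is_signed, "long long is signed");
    static_assert(std::numeric_limits<long long>::digits >= 63,
                  "long long has at least 63 value bits");

    int main() {}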



Answer 6:

The C++ Language Specification simply states that the size of a long must be at least the size of an int.

It used to be standard to have int = 2 bytes and long = 4 bytes. For some reason int grew and long stayed the same (on Windows compilers at least). I can only speculate that long was kept the same for reasons of backward compatibility...



Answer 7:

No one has answered your actual question, except maybe AProgrammer.

The C/C++ standard is defined as Griwes described. This allows the C and C++ languages to be implemented so that the compiler vendor can define the sizes most convenient for the computer architecture. For a while (for Windows 3.1 and earlier, that is, before Windows 95), C code for Windows had a 16-bit int, whereas many UNIX platforms, such as Solaris, HPUX, and AIX, had a 32-bit int.

However, modern microcomputers (since the 386) have full 32-bit registers, and accessing memory aligned to 32 bits is much faster than accessing it in 16-bit increments. Thus code is much more efficient with a 32-bit int, especially for int arrays, than it would be with a 16-bit int.

To simulate a 16-bit int in a 32-bit register, you also have to make it overflow at the 16th bit instead of the 32nd. So it's just easier to use a 32-bit int, even if that leaves only 32 bits for long.
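
A rough illustration of that extra work (the function name is made up for this example): emulating 16-bit wrap-around on top of a wider type requires an explicit truncation after the operation, whereas a native-width int wraps in hardware for free.

    #include <cstdint>
    #include <iostream>

    // Add two values but force the result to wrap at 16 bits, as a 16-bit
    // register would; the cast discards everything above bit 15.
    std::uint16_t add16(std::uint32_t a, std::uint32_t b) {
        return static_cast<std::uint16_t>(a + b);
    }

    int main() {
        std::cout << add16(0xFFFF, 1) << '\n';  // prints 0: wrapped at the 16th bit
    }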



Answer 8:

I think it's implementation-dependent. The sizes can vary, but it is up to the vendor to supply them. However, some vendors simply make the types syntactically "supported" (i.e. you can use them in your code and it will compile, but the types don't actually differ in size). Every so often, you'll encounter a language feature like this.