In the program listed below, sizeof(int) and sizeof(long) are equal on my machine (both are 4 bytes, or 32 bits). A long, as far as I know, is 8 bytes. Is this correct? I have a 64-bit machine.
#include <stdio.h>
#include <limits.h>
int main(void) {
    printf("sizeof(short) = %d\n", (int)sizeof(short));
    printf("sizeof(int) = %d\n", (int)sizeof(int));
    printf("sizeof(long) = %d\n", (int)sizeof(long));
    printf("sizeof(float) = %d\n", (int)sizeof(float));
    printf("sizeof(double) = %d\n", (int)sizeof(double));
    printf("sizeof(long double) = %d\n", (int)sizeof(long double));
    return 0;
}
A long, as far as I know, is 8 bytes. Is this correct?
No, this is not correct. The C and C++ specifications only state that long must be greater than or equal to 32 bits. int can be smaller, but on many platforms, in C and C++, long and int are both 32 bits.
This is a very good reason to prefer fixed-width integer types such as int64_t, if they're available to you, i.e., if you're using C99 or a framework that provides an equivalent type.
It depends on your ABI. If you are using Windows, Microsoft chose an LLP64 model, so long and int are both 32 bits, while long long and pointers are 64 bits in 64-bit builds, for legacy reasons. Most UNIX platforms chose an LP64 model, which makes int 32 bits and long, long long, and pointers all 64 bits in 64-bit builds.
But, as Reed Copsey notes, the standard only states minimum lengths and relative lengths: long must be greater than or equal to the length of int.
C is not Java or C#. As others wrote, the C spec states only minimum lengths and relative lengths. Why? C was designed to be low-level and to compile and run on virtually any hardware ever made.
If a spec promises the programmer too much, its implementation can get tricky (and slow) when hardware does not support such a thing. Java and C# developers don't care too much, they like the convenience of their languages for higher-level jobs and don't hesitate to install large virtual machines that take care of fulfilling all the promises (or at least most of them).
C programmers want to have control over the code even on machine instruction level. Complete control over the hardware. Only this way can all hardware features be used, and maximum performance be reached. But you have to be careful with your assumptions about the hardware you are using. As always, with great power comes great responsibility.
Just to illustrate that: C does not even assume 8-bit bytes. See CHAR_BIT in <limits.h>.
A related question is Size of an integer in C.
No, the standard defines minimum ranges, which require long to be at least 32 bits wide and int at least 16 bits wide. The C99 draft standard, section 5.2.4.2.1 Sizes of integer types <limits.h>, paragraph 1, says:

[...] Their implementation-defined values shall be equal or greater in magnitude [...]

and for long we have:

LONG_MIN -2147483647 // -(2^31 - 1)
LONG_MAX +2147483647 // 2^31 - 1

which requires at least 32 bits. For completeness' sake, we have the following for int:

INT_MIN -32767 // -(2^15 - 1)
INT_MAX +32767 // 2^15 - 1

which requires at least 16 bits.