Wouldn't it have made more sense to make long 64-bit and reserve long long until 128-bit numbers become a reality?
Ever since the days of the first C compiler for a general-purpose reprogrammable microcomputer, it has often been necessary for code to make use of types that held exactly 8, 16, or 32 bits, but until 1999 the Standard didn't explicitly provide any way for programs to specify that. On the other hand, nearly all compilers for 8-bit, 16-bit, and 32-bit microcomputers define "char" as 8 bits, "short" as 16 bits, and "long" as 32 bits. The only difference among them is whether "int" is 16 bits or 32.
While a 32-bit or larger CPU could use "int" as a 32-bit type, leaving "long" available as a 64-bit type, there is a substantial corpus of code which expects that "long" will be 32 bits. The C Standard did add fixed-size types in 1999, but other parts of the Standard, such as the "printf" format specifiers, are still defined in terms of "int" and "long". And although C99 added macros to supply the proper format specifiers for the fixed-size integer types, there is a substantial corpus of code which expects that "%ld" is a valid format specifier for int32_t, since that works on just about any 8-bit, 16-bit, or 32-bit platform.
Whether it makes more sense to have "long" be 32 bits, out of respect for an existing code base going back decades, or 64 bits, so as to avoid the need for the more verbose "long long" or "int64_t" to identify the 64-bit types, is probably a judgment call. Given that new code should probably favor the use of specified-size types when practical, I'm not sure I see a compelling advantage to making "long" 64 bits unless "int" is also 64 bits (which would create even bigger problems with existing code).
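Since the answer above leans on the difference between "%ld" and the C99 format macros, here is a minimal sketch (my own illustration, not part of the answer) of how new code can use the fixed-size types from <stdint.h> together with the <inttypes.h> macros and never care how wide "long" actually is:

```c
/* A minimal sketch: fixed-width types from <stdint.h> plus the matching
 * format macros from <inttypes.h> print correctly whether "long" is
 * 32 or 64 bits on the target platform. */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t a = 2000000000;            /* exactly 32 bits everywhere */
    int64_t b = 9000000000000000000LL; /* exactly 64 bits everywhere */

    /* "%" PRId32 concatenates to "%d" or "%ld" as appropriate for this
     * platform, so the code never hard-codes the width of "int" or "long". */
    printf("a = %" PRId32 ", b = %" PRId64 "\n", a, b);
    return 0;
}
```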
The C standard does NOT specify the exact width of the primitive data types, only their minimum widths. So compilers have latitude in choosing the width of each primitive type. When deciding the width of each primitive data type, the compiler designer has to consider several factors, including the target computer architecture.
Here is a reference: http://en.wikipedia.org/wiki/C_syntax#Primitive_data_types
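As a quick sketch (my own illustration, not from the reference above), you can see which widths a particular compiler and ABI actually chose by printing them directly:

```c
/* Prints the widths this particular compiler/ABI chose for the primitive
 * integer types; the standard only pins down the minimums. */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("char:      %zu bits\n", sizeof(char)      * CHAR_BIT);
    printf("short:     %zu bits\n", sizeof(short)     * CHAR_BIT);
    printf("int:       %zu bits\n", sizeof(int)       * CHAR_BIT);
    printf("long:      %zu bits\n", sizeof(long)      * CHAR_BIT);
    printf("long long: %zu bits\n", sizeof(long long) * CHAR_BIT);
    return 0;
}
```

On 64-bit Linux this typically prints 8/16/32/64/64; on 64-bit Windows it prints 8/16/32/32/64.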
C99 N1256 standard draft
Sizes of "long" and "long long" are implementation defined; all we know are the minimums and the relative ordering.
5.2.4.2.1 Sizes of integer types <limits.h> gives the minimum magnitudes: LONG_MIN/LONG_MAX must reach at least -2147483647/+2147483647, so "long" is at least 32 bits, and LLONG_MIN/LLONG_MAX must reach at least -9223372036854775807/+9223372036854775807, so "long long" is at least 64 bits.
6.2.5 Types then says that for any two integer types with the same signedness and different integer conversion rank, the range of values of the type with smaller rank is a subrange of the values of the other type.
and 6.3.1.1 Boolean, characters, and integers determines the relative conversion ranks: the rank of "long long int" is greater than the rank of "long int", which is greater than the rank of "int", which is greater than the rank of "short int".
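A minimal sketch (my own illustration, not part of the standard text) that turns those minimum guarantees into compile-time checks using <limits.h>; these #error branches could only fire on a non-conforming implementation:

```c
/* Compile-time checks of the C99 minimum guarantees for "long" and
 * "long long"; the actual maximums remain implementation defined. */
#include <limits.h>
#include <stdio.h>

#if LONG_MAX < 2147483647L
#error "non-conforming: long must hold at least 2^31 - 1"
#endif

#if LLONG_MAX < 9223372036854775807LL
#error "non-conforming: long long must hold at least 2^63 - 1"
#endif

int main(void)
{
    /* Only the minimums above and the rank ordering are guaranteed. */
    printf("LONG_MAX  = %ld\n", LONG_MAX);
    printf("LLONG_MAX = %lld\n", LLONG_MAX);
    return 0;
}
```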
For historical reasons. For a long time (pun intended), "int" meant 16 bits; hence "long" became 32 bits. Of course, times changed; hence "long long" :)
PS:
GCC (and others) currently support 128-bit integers, as "__int128" and "unsigned __int128" on 64-bit targets (see the sketch below).
PPS:
Here's a discussion of why the folks at GCC made the decisions they did:
http://www.x86-64.org/pipermail/discuss/2005-August/006412.html
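A minimal sketch of the extension mentioned in the PS (assumes GCC or Clang on a 64-bit target; "__int128" is non-standard, and printf has no format specifier for it, so the value is printed via its high and low 64-bit halves):

```c
/* GCC/Clang extension: 128-bit integers via __int128 (64-bit targets only).
 * printf cannot print them directly, so split into high/low 64-bit halves. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* (2^64 - 1)^2 overflows 64 bits but fits comfortably in 128 bits. */
    unsigned __int128 x = (unsigned __int128)UINT64_MAX * UINT64_MAX;

    uint64_t hi = (uint64_t)(x >> 64);
    uint64_t lo = (uint64_t)x;

    /* Print as a 128-bit hex value: high half, then low half. */
    printf("x = 0x%016llx%016llx\n",
           (unsigned long long)hi, (unsigned long long)lo);
    return 0;
}
```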
Yes, it does make sense, but Microsoft had their own reasons for defining "long" as 32 bits.
As far as I know, of all the mainstream 64-bit systems right now, Windows is the only one where "long" is 32 bits (the LLP64 model). On 64-bit Unix and Linux, it's 64 bits (LP64).
All compilers targeting Windows keep "long" at 32 bits to maintain compatibility with Microsoft's ABI.
For this reason, I avoid using "int" and "long". Occasionally I'll use "int" for error codes and booleans (in C), but I never use them for any code that is dependent on the size of the type.
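A minimal sketch of why that matters for size-dependent code (the struct names here are my own, purely illustrative): a struct containing a "long" changes layout between LP64 Unix/Linux and LLP64 Windows, while the fixed-width version does not.

```c
/* The same struct has a different size under LP64 (64-bit Linux/Unix)
 * and LLP64 (64-bit Windows) when it contains a "long". */
#include <stdio.h>
#include <stdint.h>

struct record_bad {
    long id;       /* 8 bytes on 64-bit Linux, 4 bytes on 64-bit Windows */
    int  flags;
};

struct record_good {
    int64_t id;    /* 8 bytes everywhere */
    int32_t flags; /* 4 bytes everywhere */
};

int main(void)
{
    printf("sizeof(struct record_bad)  = %zu\n", sizeof(struct record_bad));
    printf("sizeof(struct record_good) = %zu\n", sizeof(struct record_good));
    return 0;
}
```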