How is the size of int decided?
Is it true that the size of int depends on the processor? For a 32-bit machine it will be 32 bits, and for a 16-bit machine, 16.
On my machine it shows as 32 bits, although the machine has a 64-bit processor and 64-bit Ubuntu installed.
The size of an integer basically depends on the architecture of your system. Generally, if you have a 16-bit machine, your compiler must support an int of size 2 bytes. If your system is 32-bit, then the compiler must support a 4-byte integer. In more detail, the data bus comes into the picture: 16-bit and 32-bit refer to nothing but the size of the data bus in your system.

Making int
as wide as possible is not the best choice. (The choice is made by the ABI designers.) A 64-bit architecture like x86-64 can operate efficiently on int64_t, so it's natural for long to be 64 bits. (Microsoft kept long as 32-bit in their x86-64 ABI, for various portability reasons that make sense given the existing codebases and APIs. This is basically irrelevant, because portable code that actually cares about type sizes should be using int32_t and int64_t instead of making assumptions about int and long.)

Having int be int32_t actually makes for better, more efficient code in many cases. An array of int uses only 4 bytes per element, so it has only half the cache footprint of an array of int64_t. Also, specific to x86-64, 32-bit operand size is the default, so 64-bit instructions need an extra code byte for a REX prefix. So code density is better with 32-bit (or 8-bit) integers than with 16- or 64-bit. (See the x86 wiki for links to docs / guides / learning resources.)

If a program requires 64-bit integer types for correct operation, it won't use int. (Storing a pointer in an int instead of an intptr_t is a bug, and we shouldn't make the ABI worse to accommodate broken code like that.) A programmer writing int probably expected a 32-bit type, since most platforms work that way. (The standard of course only guarantees 16 bits.)

Since there's no expectation that int will be 64-bit in general (e.g. on 32-bit platforms), and making it 64-bit would make some programs slower (and almost no programs faster), int is 32-bit in most 64-bit ABIs.

Also, there needs to be a name for a 32-bit integer type, for int32_t to be a typedef for.

It depends on the implementation. The only thing the C standard guarantees is that
sizeof(char) == 1 and sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long), and also some minimum representable values for the types, which imply that char is at least 8 bits wide, int is at least 16 bits, etc.

So it must be decided by the implementation (compiler, OS, ...) and be documented.
It depends on the compiler.
For example, try an old Turbo C compiler and it would give a size of 16 bits for an int, because the word size (the size the processor could address with least effort) at the time the compiler was written was 16.
It depends on the compiler. If you are using Turbo C, the integer size is 2 bytes; if you are using the GNU gcc compiler, the integer size is 4 bytes. It depends only on the implementation of the C compiler.
Yes, the size of int depends on the compiler. For a 16-bit int, the range of the integer is -32768 to 32767. For 32- and 64-bit compilers it will increase.