Programming languages (e.g. C, C++, and Java) usually have several types for integer arithmetic:
- signed and unsigned types
- types of different size: `short`, `int`, `long`, `long long`
- types of guaranteed and non-guaranteed (i.e. implementation-dependent) size: e.g. `int32_t` vs `int` (and I know that `int32_t` is not part of the language)
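
To make the last distinction concrete, here is a minimal C++ sketch (assuming a platform that provides `<cstdint>`) that prints the sizes the built-in types happen to have, next to the fixed-width ones:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // Built-in types: the standard only guarantees minimum widths
    // (short and int at least 16 bits, long at least 32, long long at least 64),
    // so the numbers printed here depend on platform and compiler.
    std::cout << "short:     " << sizeof(short)     << " bytes\n";
    std::cout << "int:       " << sizeof(int)       << " bytes\n";
    std::cout << "long:      " << sizeof(long)      << " bytes\n";
    std::cout << "long long: " << sizeof(long long) << " bytes\n";

    // Fixed-width types from <cstdint>: exactly the stated width on any
    // platform that provides them.
    std::cout << "int32_t:   " << sizeof(std::int32_t)  << " bytes\n";
    std::cout << "uint64_t:  " << sizeof(std::uint64_t) << " bytes\n";
}
```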
How would you summarize when one should use each of them?
Typically, you use `int`, unless you need to expand it because you need a larger range, or you want to shrink it because you know the value only makes sense in a smaller range. It's incredibly rare that you would need to change it for memory reasons: the difference between the types is minuscule.

In short: use shorter types to save memory, longer ones to be able to represent larger numbers. If you don't have such requirements, consider which APIs you'll be sharing data with and set yourself up so you don't have to cast or convert too much.