How does Python allocate memory for large integers?
An int type has a size of 28 bytes, and as I keep increasing the value of the int, the size increases in increments of 4 bytes.

Why 28 bytes initially for any value as low as 1? Why increments of 4 bytes?
PS: I am running Python 3.5.2 on an x86_64 (64-bit) machine. Any pointers/resources/PEPs on how the (3.0+) interpreters work on such huge numbers is what I am looking for.
Code illustrating the sizes:
>>> a=1
>>> print(a.__sizeof__())
28
>>> a=1024
>>> print(a.__sizeof__())
28
>>> a=1024*1024*1024
>>> print(a.__sizeof__())
32
>>> a=1024*1024*1024*1024
>>> print(a.__sizeof__())
32
>>> a=1024*1024*1024*1024*1024*1024
>>> a
1152921504606846976
>>> print(a.__sizeof__())
36
I believe @bgusach answered that completely; Python uses C structs to represent objects in the Python world, any objects including ints.

PyObject_VAR_HEAD is a macro that, when expanded, adds another field to the struct (of type PyVarObject, which is specifically used for objects that have some notion of length), and ob_digit is an array holding the value for the number. The boilerplate in size comes from that struct, for small and large Python numbers alike.

When a larger number is created, the size (in bytes) is a multiple of sizeof(digit); you can see that in _PyLong_New, where the allocation of memory for a new longobject is performed with PyObject_MALLOC. Here offsetof(PyLongObject, ob_digit) is the 'boilerplate' (in bytes) for the long object that isn't related to holding its value, while digit is defined in the header file holding the struct _longobject as a typedef for uint32_t, and sizeof(uint32_t) is 4 bytes. That's the amount by which you'll see the size in bytes increase when the size argument to _PyLong_New increases.

Of course, this is just how CPython has chosen to implement it. It is an implementation detail and, as such, you won't find much information in PEPs. The python-dev mailing list would hold implementation discussions if you can find the corresponding thread :-). Either way, you might find differing behavior in other popular implementations, so don't take this one for granted.
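The arithmetic described above can be checked from Python itself. Below is a minimal sketch, assuming a 64-bit CPython build with 30-bit digits (as in the 3.5.2 from the question): 24 bytes of boilerplate (offsetof(PyLongObject, ob_digit)) plus one 4-byte digit per 30 bits of value. The helper name predicted_int_size is mine, not anything from CPython.

```python
import sys

def predicted_int_size(n):
    """Predict n.__sizeof__() for a non-negative int on a 64-bit CPython
    with 30-bit digits: 24 bytes of struct boilerplate plus one 4-byte
    digit per 30 bits of value (at least one digit for n > 0)."""
    ndigits = max(1, -(-n.bit_length() // 30))  # ceiling division
    return 24 + 4 * ndigits

# Compare the prediction with what the interpreter reports:
for n in (1, 1024, 1024**3, 1024**6):
    print(n, predicted_int_size(n), sys.getsizeof(n))
```

On a 3.5-era 64-bit build both columns agree (28, 28, 32, 36 for the values from the question); later CPython versions have reworked the int layout, so the reported sizes can differ there.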
It's actually easy. Python's int is not the kind of primitive you may be used to from other languages, but a full-fledged object, with its methods and all the rest. That is where the overhead comes from.

Then you have the payload itself, the integer that is being represented. And there is no limit for that, except your memory.
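The "no limit except your memory" part is easy to see interactively. A small demonstration (the exact byte counts vary by CPython version and platform, so none are hard-coded here):

```python
import sys

# The payload grows with the value: a ~100000-bit integer
big = 1 << 100000
print(big.bit_length())    # 100001 bits
print(sys.getsizeof(big))  # roughly one 4-byte digit per 30 bits, plus overhead

# A small int carries the same object overhead but far fewer digits
print(sys.getsizeof(1))
```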
The size of a Python int is what it needs to represent the number, plus a little overhead.

If you want to read further, take a look at the relevant part of the documentation.