Python memory footprint vs. heap size

Posted 2020-02-23 04:24

Question:

I'm having some memory issues while using a Python script to issue a large Solr query. I'm using the solrpy library to interface with the Solr server. The query returns approximately 80,000 records. Immediately after issuing the query, the Python memory footprint as viewed through top balloons to ~190 MB:

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND 
8225 root      16   0  193m 189m 3272 S  0.0 11.2   0:11.31 python
...

At this point, the heap profile as viewed through heapy looks like this:

Partition of a set of 163934 objects. Total size = 14157888 bytes.   
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  80472  49  7401384  52   7401384  52 unicode
     1  44923  27  3315928  23  10717312  76 str
...

The unicode objects represent the unique identifiers of the records returned by the query. One thing to note is that the total heap size is only 14 MB, while Python is occupying 190 MB of physical memory. Once the variable storing the query results falls out of scope, the heap profile correctly reflects the garbage collection:

Partition of a set of 83586 objects. Total size = 6437744 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  44928  54  3316108  52   3316108  52 str

However, the memory footprint remains unchanged:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8225 root      16   0  195m 192m 3432 S  0.0 11.3   0:13.46 python
...

Why is there such a large disparity between Python's physical memory footprint and the size of the Python heap?
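
For reference, a minimal sketch of how such a heap-vs-footprint comparison can be captured (assumptions: Linux, guppy/heapy installed; run_solr_query is a hypothetical stand-in for the actual solrpy call):

    from guppy import hpy

    def rss_mb():
        # Linux-specific: read the current resident set size from /proc
        with open('/proc/self/status') as f:
            for line in f:
                if line.startswith('VmRSS:'):
                    return int(line.split()[1]) / 1024.0  # kB -> MB

    hp = hpy()
    results = run_solr_query()            # hypothetical stand-in for the solrpy query
    print(hp.heap())                      # heapy's view of live Python objects
    print('RSS: %.1f MB' % rss_mb())      # the process footprint that top reports

    del results                           # let the result set fall out of scope
    print(hp.heap())                      # the heap profile shrinks...
    print('RSS: %.1f MB' % rss_mb())      # ...while RSS stays roughly the same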

Answer 1:

Python allocates Unicode objects from the C heap. So when you allocate many of them (along with other malloc blocks) and then release all of them except the very last one, C malloc will not return any memory to the operating system, because the C heap can only shrink from its end, not from the middle. Releasing that last Unicode object frees the block at the end of the C heap, which finally allows malloc to return the whole region to the system.

On top of that, Python also maintains a free list of released Unicode objects for faster allocation. So even when the last Unicode object is freed, it isn't returned to malloc right away, which keeps all the other pages stuck.
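
A rough way to observe this effect (a sketch, not part of the original answer; Linux-only, reusing a /proc-based RSS reader):

    def rss_kb():
        # Linux-specific: current resident set size in kB
        with open('/proc/self/status') as f:
            for line in f:
                if line.startswith('VmRSS:'):
                    return int(line.split()[1])

    # Allocate ~80,000 unique unicode IDs, comparable to the query above.
    ids = [u'doc-%d' % i for i in range(80000)]
    print('IDs alive:      %d kB' % rss_kb())

    del ids                                    # gone from the Python heap...
    print('after deleting: %d kB' % rss_kb())  # ...but RSS barely drops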



Answer 2:

The CPython implementation only exceptionally frees allocated memory back to the operating system. This is a widely known issue, but it isn't receiving much attention from the CPython developers. The recommended workaround is to "fork and die": do the RAM-hungry work in a forked child process and let that process exit.



Answer 3:

What version of Python are you using?
I'm asking because older versions of CPython did not release this memory back to the system, and this was fixed in Python 2.5.



Answer 4:

I've implemented hruske's advice of "fork and die". I'm using os.fork() to execute the memory-intensive section of code in a child process, and then I let the child process exit. The parent process executes os.waitpid() on the child so that only one process is executing at a given time.
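
A minimal sketch of that pattern (issue_query is a hypothetical placeholder for the memory-intensive work):

    import os

    def run_in_child(func, *args):
        # Fork, run the memory-hungry work in the child, and let the child's
        # exit return all of its memory to the operating system.
        pid = os.fork()
        if pid == 0:                     # child
            status = 0
            try:
                func(*args)
            except Exception:
                status = 1
            os._exit(status)             # exit immediately, skipping cleanup
        # parent: wait so that only one process is running at a time
        _, status = os.waitpid(pid, 0)
        return status

    run_in_child(issue_query, 'id:*')    # issue_query is hypothetical

One caveat: anything the child computes disappears with it, so any results the parent needs must be shipped back through a pipe, a file, or some other IPC channel before the child exits.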

If anyone sees any pitfalls with this solution, please chime in.