Python 2.7 MemoryError (64bit, Ubuntu) with plenty

Posted 2019-08-10 10:53

Python 2.7.10 (via conda) on Ubuntu 14.04 with 60GB RAM.

Working with large datasets in IPython notebooks. I'm getting MemoryErrors even though, by my reading of `top`, there are many GB left for the process to grow into. Here's a representative excerpt from `top`:

KiB Mem:  61836572 total, 61076424 used,   760148 free,     2788 buffers
KiB Swap:        0 total,        0 used,        0 free. 31823408 cached Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                                                                                  
 81176 ubuntu    20   0 19.735g 0.017t   3848 R 100.9 30.3  12:48.89 /home/ubuntu/miniconda/envs/ds_notebook/bin/python -m ipykernel -f /run/user/1000/jupyter/kernel-4c9c1a51-da60-457b-b55e-faadf9ae06fd.json                                              
 80702 ubuntu    20   0 11.144g 9.295g      8 S   0.0 15.8   1:27.28 /home/ubuntu/miniconda/envs/ds_notebook/bin/python -m ipykernel -f /run/user/1000/jupyter/kernel-1027385c-f5e2-42d9-a5f0-7d837a39bdfe.json                                               

So those two processes together are using just over 30GB of address space and about 26GB resident. (All other processes are tiny.)

My understanding (and many online sources agree) is that the ~31GB 'cached' total is available to be reclaimed from the page cache by programs when needed. (The output of `free -m` shows 30+GB in buffers/cache as well.)
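For reference, the 'free' column in `top` understates what a process can actually obtain, since most of the page cache is reclaimable. On Linux the kernel's own estimate is the `MemAvailable` field of `/proc/meminfo` (added in kernel 3.14 and backported to many distro kernels, so it may or may not be present on 14.04). A small sketch to read it:

```python
def meminfo():
    """Parse /proc/meminfo into a dict of {field: kB}."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are in kB
    return info

m = meminfo()
print("MemFree:      %d kB" % m["MemFree"])
# Fall back to MemFree + Cached on older kernels without MemAvailable
print("MemAvailable: %d kB" % m.get("MemAvailable",
                                    m["MemFree"] + m.get("Cached", 0)))
```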

And yet, Python is failing to allocate new structures of just a couple GB.

All the limits reported by the Python `resource` module appear to be unlimited.
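To double-check, the limits most likely to produce a MemoryError despite free physical RAM can be listed directly (this sketch assumes a Unix platform where the `resource` module is available):

```python
import resource

# The limits most relevant to allocation failures:
#   RLIMIT_AS   - total virtual address space
#   RLIMIT_DATA - heap (data segment) size
#   RLIMIT_RSS  - resident set size (advisory on modern Linux)
for name in ("RLIMIT_AS", "RLIMIT_DATA", "RLIMIT_RSS"):
    soft, hard = resource.getrlimit(getattr(resource, name))
    show = lambda v: "unlimited" if v == resource.RLIM_INFINITY else v
    print("%s: soft=%s hard=%s" % (name, show(soft), show(hard)))
```

If any of these report a finite soft limit, `resource.setrlimit` (or the shell's `ulimit`) is the place to raise it.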

Why won't the Python process take (or be given) any more free address space and physical memory?

1 Answer

Ridiculous · 2019-08-10 11:49

This may not be the answer — we'd need more investigation and details about exactly what you are doing and how your system is configured — but: you have less than one GB free (760MB), while 31GB is cached. So it's possible there is no memory left to allocate because of memory fragmentation. I suspect all that cached memory was left behind by previous loads/releases of data, and after enough of this churn, fragmentation may prevent the allocation of such a large block of memory. With no swap configured, this becomes a real problem.
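One related thing worth checking on a swapless box is the kernel's overcommit policy: under strict accounting (`vm.overcommit_memory = 2`) allocations can be refused even while the page cache holds reclaimable memory, because the commit limit is computed from RAM and swap, not from what is currently cached. A small sketch to inspect the setting (assumes Linux):

```python
def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

# 0 = heuristic overcommit, 1 = always allow, 2 = strict accounting
mode = read_int("/proc/sys/vm/overcommit_memory")
print("vm.overcommit_memory = %d" % mode)
if mode == 2:
    # In strict mode the commit limit is swap + RAM * ratio / 100
    print("vm.overcommit_ratio = %d" %
          read_int("/proc/sys/vm/overcommit_ratio"))
```

The current commitment versus the limit is also visible in the `CommitLimit` and `Committed_AS` fields of `/proc/meminfo`.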
