Python memory allocation error using subprocess.Popen

Posted 2019-01-15 10:17

Question:

I am doing some bioinformatics work. I have a Python script that at one point calls an external program to do an expensive step (sequence alignment, which uses a lot of computational power and memory). I call it using subprocess.Popen. When I run it on a test case, it completes fine. However, when I run it on the full file, where it would have to do this multiple times for different sets of inputs, it dies. subprocess throws:

OSError: [Errno 12] Cannot allocate memory

I found a few links here and here and here to similar problems, but I'm not sure that they apply in my case.

By default, the sequence aligner will try to request 51000M of memory. It doesn't always use that much, but it might. With the full input loaded and processed, that much is not available. However, capping the amount it requests (or will attempt to use) at a lower amount that should be available at run time still gives me the same error. I've also tried running with shell=True, with the same result.
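For reference, a minimal sketch of the kind of call in question (the aligner command and its arguments below are placeholders, not the actual command):

    import subprocess

    # minimal sketch; "aligner" and its arguments are placeholders
    cmd = ["aligner", "--max-mem", "51000M", "reads.fa", "aln.sam"]
    proc = subprocess.Popen(cmd)   # OSError: [Errno 12] is raised here, at fork time
    proc.wait()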

This has been bugging me for a few days now. Thanks for any help.

Edit: Expanding the traceback:

File "..../python2.6/subprocess.py", line 1037, in _execute_child
    self.pid=os.fork()
OSError: [Errno 12] Cannot allocate memory

throws the error.

Edit 2: Running Python 2.6.4 on 64-bit Ubuntu 10.04.

Answer 1:

I feel really sorry for the OP. Six years later and no one has mentioned that this is a very common problem on Unix, and it actually has nothing to do with Python or bioinformatics. A call to os.fork() temporarily doubles the memory commitment of the parent process (the parent's memory must be available to the child) before the child throws it all away to do an exec(). While this memory isn't always actually copied, the system must have enough memory to allow for it to be copied, so if your parent process is using more than half of the system memory and you subprocess out even "wc -l", you're going to run into a memory error.

The solution is to use posix_spawn, or to create all your subprocesses at the beginning of the script, while memory consumption is low, and then use them later, after the parent process has done its memory-intensive work.
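For example, on Python 3.8+ (not the OP's Python 2.6), os.posix_spawn launches the child directly, without fork's memory commitment. A minimal sketch, assuming a hypothetical aligner binary:

    import os

    # minimal sketch, assuming Python 3.8+ where os.posix_spawn is available;
    # the aligner path and arguments are placeholders
    path = "/usr/bin/aligner"
    argv = ["aligner", "reads.fa", "aln.sam"]

    pid = os.posix_spawn(path, argv, os.environ)   # no fork of the large parent
    _, status = os.waitpid(pid, 0)                 # reap the child
    print("exit code:", os.WEXITSTATUS(status))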

A Google search using the keywords "os.fork" and "memory" will show several Stack Overflow posts on the topic that can further explain what's going on :)



Answer 2:

This doesn't have anything to do with Python or the subprocess module. subprocess.Popen is merely reporting to you the error that it is receiving from the operating system. (What operating system are you using, by the way?) From man 2 fork on Linux:

ENOMEM    fork()  failed  to  allocate  the  necessary  kernel  structures
          because memory is tight.

Are you calling subprocess.Popen multiple times? If so, then I think the best you can do is make sure that the previous invocation of your process is terminated and reaped before the next invocation.
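A sketch of that pattern, waiting on (and thereby reaping) each child before launching the next; "aligner" and the input names are placeholders:

    import subprocess

    for infile in ["chunk1.fa", "chunk2.fa", "chunk3.fa"]:
        proc = subprocess.Popen(["aligner", infile])
        returncode = proc.wait()   # blocks until the child exits and reaps it
        if returncode != 0:
            raise RuntimeError("aligner failed on %s (exit %d)" % (infile, returncode))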



Answer 3:

Do you use subprocess.PIPE? I had problems myself, and read about others having problems, when it was used. Temporary files usually solved the problem.
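A sketch of that approach, capturing the child's output in a temporary file instead of subprocess.PIPE ("aligner" is a placeholder):

    import subprocess
    import tempfile

    with tempfile.TemporaryFile() as out:
        proc = subprocess.Popen(["aligner", "reads.fa"], stdout=out)
        proc.wait()
        out.seek(0)
        data = out.read()   # output is read back from disk, not a pipe buffer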



Answer 4:

I'd run a 64-bit Python on a 64-bit OS.

With 32-bit, you can only really get about 3 GB of RAM per process before the OS starts telling you no more.

Another alternative might be to use memory-mapped files to open the file:

http://docs.python.org/library/mmap.html
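A minimal sketch of reading a large input through mmap instead of loading it all into memory at once ("reads.fa" is a placeholder filename):

    import mmap

    with open("reads.fa", "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        header = mm.readline()   # lines can be read lazily from the mapping
        mm.close()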

Edit: Ah, you're on 64-bit. Possibly the cause is that you're running out of RAM + swap; the fix might be to increase the amount of swap.