I'm developing an application in Python on Ubuntu and I'm running external binaries from within Python using subprocess. Since these binaries are generated at run time and can go rogue, I need to keep a strict tab on the memory footprint and runtime of these binaries. Is there some way I can limit or monitor the memory usage of these binary programs at runtime? I would really hate to use something like "ps" in a subprocess for this purpose.
Answer 1:
Once you have the PID of your subprocess, you can read all the information from the proc filesystem. Use:
/proc/[PID]/smaps (since Linux 2.6.14): this file shows memory consumption for each of the process's mappings, with a series of detail lines per mapping.
or
/proc/[PID]/statm Provides information about memory usage, measured in pages.
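For example, here is a minimal sketch (assuming Linux with /proc mounted; the binary name is hypothetical) that polls /proc/[PID]/statm while the subprocess runs:

import os
import subprocess
import time

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")  # typically 4096 bytes

def memory_usage(pid):
    # The first two fields of statm are total program size and resident set size, in pages.
    with open("/proc/%d/statm" % pid) as f:
        size, resident = f.read().split()[:2]
    return int(size) * PAGE_SIZE, int(resident) * PAGE_SIZE

proc = subprocess.Popen(["./generated_binary"])  # hypothetical binary name
while proc.poll() is None:
    try:
        virt, rss = memory_usage(proc.pid)
    except OSError:  # the process exited between poll() and the read
        break
    print("virtual: %d bytes, resident: %d bytes" % (virt, rss))
    time.sleep(0.5)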
Alternatively, you can limit the resources the subprocess can acquire with:
subprocess.Popen('ulimit -v 1024; ls', shell=True)
When the given virtual memory limit (ulimit -v takes kilobytes) is reached, the process fails with an out-of-memory error.
Answer 2:
You can use Python's resource module to set limits before spawning your subprocess.
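For example, a minimal sketch (the 512 MiB cap and the binary name are assumptions) that applies the limit via preexec_fn, which runs in the child after fork() and before exec(), so the parent Python process is left untouched:

import resource
import subprocess

LIMIT_BYTES = 512 * 1024 * 1024  # assumed 512 MiB budget for the generated binary

def limit_virtual_memory():
    # Executed in the child before exec(); RLIMIT_AS caps its address space.
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))

proc = subprocess.Popen(["./generated_binary"], preexec_fn=limit_virtual_memory)
proc.wait()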
For monitoring, resource.getrusage() will give you summarized information over all your subprocesses; if you want per-subprocess information, you can use the /proc trick from the other answer (non-portable but effective), or layer a Python program in between every subprocess and set up some communication (portable, ugly, mildly effective).
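For the summarized view, a minimal sketch of getrusage with RUSAGE_CHILDREN (the binary name is hypothetical; the counters cover all children that have been waited for, not a single subprocess):

import resource
import subprocess

subprocess.call(["./generated_binary"])  # hypothetical binary name

usage = resource.getrusage(resource.RUSAGE_CHILDREN)
# On Linux, ru_maxrss is reported in kilobytes.
print("peak RSS among waited-for children: %d kB" % usage.ru_maxrss)
print("user CPU time of children: %.2f s" % usage.ru_utime)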