Total memory used by Python process?

Posted 2018-12-31 17:47

Is there a way for a Python program to determine how much memory it's currently using? I've seen discussions about memory usage for a single object, but what I need is total memory usage for the process, so that I can determine when it's necessary to start discarding cached data.

12 Answers
Answer 2 · 2018-12-31 18:00

Current memory usage of the current process on Linux, for Python 2, Python 3, and PyPy, without any imports:

def getCurrentMemoryUsage():
    ''' Memory usage in kB '''

    with open('/proc/self/status') as f:
        # the rest of the VmRSS line looks like '\t  1234 kB'; drop the trailing ' kB'
        memusage = f.read().split('VmRSS:')[1].split('\n')[0][:-3]

    return int(memusage.strip())

Tested on Linux 4.4 and 4.9, but even an early Linux version should work.

The man proc entry for the /proc/$PID/status file lists minimum kernel versions for some fields (for example, Linux 2.6.10 for "VmPTE"), but the "VmRSS" field used here carries no such note, so I assume it has been present since an early kernel version.
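
A quick usage sketch (the helper is the one defined above; the MB conversion is just for readability):

usage_kb = getCurrentMemoryUsage()
print('Current RSS: %.1f MB' % (usage_kb / 1024.0))  # the value read from /proc is in kB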

Answer 3 · 2018-12-31 18:02

For Unixes (Linux, Mac OS X, Solaris) you could also use the getrusage() function from the standard library module resource. The resulting object has the attribute ru_maxrss, which gives peak memory usage for the calling process:

>>> import resource
>>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
2656  # peak memory usage (bytes on OS X, kilobytes on Linux)

The Python docs aren't explicit about the units, but the Mac OS X man page for getrusage(2) describes them as bytes, while the Linux man page reports ru_maxrss in kilobytes, matching the information in /proc/self/status.

The getrusage() function can also be given resource.RUSAGE_CHILDREN to get the usage of waited-for child processes, and (on some systems) resource.RUSAGE_BOTH for total (self and child) process usage.

resource is a standard library module, but note that it is Unix-only and not available on Windows.

If you only care about Linux, you can just check the /proc/self/status file as described in a similar question.
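
If you want a single helper that hides the unit difference, a minimal sketch (assuming only Linux and macOS, and relying on the bytes-vs-kilobytes behaviour described above) might look like this:

import sys
import resource

def peak_memory_bytes():
    # ru_maxrss is reported in kilobytes on Linux and in bytes on macOS
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss if sys.platform == 'darwin' else rss * 1024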

Answer 4 · 2018-12-31 18:07

On Windows, you can use the WMI module (available on PyPI):


def memory():
    '''Return the working set (in bytes) of the current process, via WMI.'''
    import os
    from wmi import WMI
    w = WMI('.')
    result = w.query("SELECT WorkingSet FROM Win32_PerfRawData_PerfProc_Process WHERE IDProcess=%d" % os.getpid())
    return int(result[0].WorkingSet)

On Linux (from the Python Cookbook, http://code.activestate.com/recipes/286222/):

import os
_proc_status = '/proc/%d/status' % os.getpid()

_scale = {'kB': 1024.0, 'mB': 1024.0*1024.0,
          'KB': 1024.0, 'MB': 1024.0*1024.0}

def _VmB(VmKey):
    '''Private: read the given field from /proc/<pid>/status and return bytes.
    '''
    # read the pseudo file /proc/<pid>/status
    try:
        t = open(_proc_status)
        v = t.read()
        t.close()
    except (IOError, OSError):
        return 0.0  # non-Linux?
    # find the VmKey line, e.g. 'VmRSS:  9999  kB\n ...'
    i = v.index(VmKey)
    v = v[i:].split(None, 3)  # split on whitespace
    if len(v) < 3:
        return 0.0  # invalid format?
    # convert the Vm value to bytes using the unit suffix
    return float(v[1]) * _scale[v[2]]


def memory(since=0.0):
    '''Return memory usage in bytes.
    '''
    return _VmB('VmSize:') - since


def resident(since=0.0):
    '''Return resident memory usage in bytes.
    '''
    return _VmB('VmRSS:') - since


def stacksize(since=0.0):
    '''Return stack size in bytes.
    '''
    return _VmB('VmStk:') - since
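
The since parameter lets you pass in a previously sampled value and get back the difference. A brief usage sketch:

before = resident()                         # resident set size right now, in bytes
data = ['X' * 100000 for _ in range(100)]   # allocate something
print('resident grew by %d bytes' % resident(before))  # delta relative to `before`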
Answer 5 · 2018-12-31 18:08

On Windows, another option is the pywin32 package (win32api, win32con, win32process):
import os, win32api, win32con, win32process
# open a handle to the current process and read its working set size (in bytes)
han = win32api.OpenProcess(win32con.PROCESS_QUERY_INFORMATION | win32con.PROCESS_VM_READ, 0, os.getpid())
process_memory = int(win32process.GetProcessMemoryInfo(han)['WorkingSetSize'])
win32api.CloseHandle(han)
Answer 6 · 2018-12-31 18:08

Even easier to use than /proc/self/status: /proc/self/statm. It's just a space-delimited list of several statistics. I haven't been able to tell whether both files are always present.

/proc/[pid]/statm

Provides information about memory usage, measured in pages. The columns are:

  • size (1) total program size (same as VmSize in /proc/[pid]/status)
  • resident (2) resident set size (same as VmRSS in /proc/[pid]/status)
  • shared (3) number of resident shared pages (i.e., backed by a file) (same as RssFile+RssShmem in /proc/[pid]/status)
  • text (4) text (code)
  • lib (5) library (unused since Linux 2.6; always 0)
  • data (6) data + stack
  • dt (7) dirty pages (unused since Linux 2.6; always 0)

Here's a simple example:

from pathlib import Path
from resource import getpagesize


def get_resident_set_size():
    # Columns are: size resident shared text lib data dt
    statm = Path('/proc/self/statm').read_text()
    fields = statm.split()
    return int(fields[1]) * getpagesize()


data = []
start_memory = get_resident_set_size()
for _ in range(10):
    data.append('X' * 100000)
    print(get_resident_set_size() - start_memory)

That produces a list that looks something like this:

0
0
368640
368640
368640
638976
638976
909312
909312
909312

You can see that it jumps by about 300,000 bytes after roughly 3 allocations of 100,000 bytes.
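
If you want the other columns as well, here is a small sketch (the field names are taken from the man page excerpt above) that maps every column to bytes:

from pathlib import Path
from resource import getpagesize

def statm_fields():
    # columns of /proc/self/statm, per the man page excerpt above
    names = ('size', 'resident', 'shared', 'text', 'lib', 'data', 'dt')
    values = Path('/proc/self/statm').read_text().split()
    page = getpagesize()
    return {name: int(value) * page for name, value in zip(names, values)}

Note that the lib and dt columns are always 0 on modern kernels, as the man page excerpt says.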

Answer 7 · 2018-12-31 18:10

On Unix, you can use the ps tool to monitor it:

$ ps u -p 1347 | awk '{sum=sum+$6}; END {print sum/1024}'

where 1347 is the process id of interest. Column 6 of ps u is the RSS in kB, so the awk script's division by 1024 gives the result in MB.
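
If you would rather query ps from inside Python, a minimal sketch (assuming a ps that supports the -o rss= option, as on Linux and macOS) could look like this:

import os
import subprocess

def rss_kb_via_ps():
    # 'ps -o rss=' prints the resident set size in kB, with no header line
    out = subprocess.check_output(['ps', '-o', 'rss=', '-p', str(os.getpid())])
    return int(out.strip())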
