I'm looking to generate, from a large Python codebase, a summary of heap usage or memory allocations over the course of a function's run.
I'm familiar with heapy, and it's served me well for taking "snapshots" of the heap at particular points in my code, but I've found it difficult to generate a "memory-over-time" summary with it. I've also played with line_profiler, but that works with run time, not memory.
My fallback right now is Valgrind with massif, but that lacks a lot of the contextual Python information that both heapy and line_profiler give. Is there some sort of combination of the latter two that gives a sense of memory usage or heap growth over the execution span of a Python program?
I would use sys.settrace at program startup to register a custom trace function. The trace function is called for each line of code executed, so you can use it to record information gathered by heapy or meliae to a file for later processing. Here is a very simple example which logs a heap snapshot each second to a plain text file:
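A minimal sketch of that idea. It uses the stdlib tracemalloc instead of heapy so it runs with no third-party dependencies; a call to hpy().heap() would slot into the same place. The interval, log path, and function names (LOG_INTERVAL, heap_trace, work) are illustrative choices, not anything from a library:

```python
import sys
import time
import tracemalloc

LOG_INTERVAL = 1.0      # seconds between snapshots
LOG_PATH = "heap_log.txt"
_last_log = 0.0         # time of the last snapshot; 0 forces an immediate first log

def heap_trace(frame, event, arg):
    """Trace function: on each 'line' event, append the current heap size
    to LOG_PATH, at most once per LOG_INTERVAL."""
    global _last_log
    if event == "line":
        now = time.time()
        if now - _last_log >= LOG_INTERVAL:
            _last_log = now
            current, peak = tracemalloc.get_traced_memory()
            with open(LOG_PATH, "a") as f:
                f.write("%s %s:%d current=%d bytes peak=%d bytes\n" % (
                    time.strftime("%H:%M:%S"),
                    frame.f_code.co_filename, frame.f_lineno,
                    current, peak))
    # returning the trace function keeps line-level tracing active
    return heap_trace

def work():
    # toy workload that steadily allocates memory
    data = []
    for i in range(50000):
        data.append([i] * 10)
    return len(data)

tracemalloc.start()
sys.settrace(heap_trace)
result = work()
sys.settrace(None)
tracemalloc.stop()
```

Note that tracing every line is slow; throttling the logging (as above) or sampling from a background thread keeps the overhead manageable on a large codebase.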
You might be interested in memory_profiler. It reports line-by-line memory usage for functions decorated with @profile, and its mprof command records memory over time for a whole run (mprof run script.py, then mprof plot).
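A small usage sketch, assuming memory_profiler has been installed with pip; the try/except fallback is my addition so the snippet also runs without it, and the function name and allocation sizes are arbitrary:

```python
# example.py -- run with `python example.py` to see the line-by-line table,
# or `mprof run example.py` followed by `mprof plot` for a memory-over-time graph.
try:
    from memory_profiler import profile
except ImportError:
    # fall back to a no-op decorator if memory_profiler isn't installed
    def profile(func):
        return func

@profile
def allocate():
    a = [1] * (10 ** 6)       # roughly 8 MB of list storage
    b = [2] * (2 * 10 ** 7)   # roughly 160 MB, visible as a spike
    del b                     # freed before returning
    return a

if __name__ == "__main__":
    allocate()
```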