Memory profiling on Google Cloud Dataflow

Posted 2019-01-26 22:59

Question:

What would be the best way to debug memory issues of a dataflow job?

My job was failing with a GC OOM error, but when I profile it locally I cannot reproduce the exact scenarios and data volumes.

I'm now running it on 'n1-highmem-4' machines and I no longer see the error, but the job is very slow, so clearly using machines with more RAM is not the solution :)

Thanks for any advice, G

Answer 1:

Please use the options --dumpHeapOnOOM and --saveHeapDumpsToGcsPath (see the docs).
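For a Java pipeline you can pass these flags along with your usual runner options when launching the job. A minimal sketch, assuming a Maven project; the main class, project, and bucket names are placeholders, not from the original answer:

    mvn compile exec:java \
      -Dexec.mainClass=com.example.MyPipeline \
      -Dexec.args="--runner=DataflowRunner \
        --project=<your-project> \
        --dumpHeapOnOOM=true \
        --saveHeapDumpsToGcsPath=gs://<your-bucket>/heap-dumps"

With these set, a worker that hits an OutOfMemoryError writes a heap dump and uploads it to the given GCS path, which you can then download and inspect locally in a tool such as Eclipse MAT or VisualVM.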

This will only help if one of your workers actually OOMs. If no worker is OOMing but you still observe high memory usage, you can additionally run jmap against the harness process on the worker (for example, jmap -dump:format=b,file=/tmp/heap.hprof <PID>) to obtain a heap dump at runtime.
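A rough sketch of that manual workflow, assuming you can SSH to the worker VM with gcloud and that the JDK's jps/jmap tools are available there; the instance name, zone, PID, and paths below are placeholders:

    # SSH into one of the job's worker VMs (name visible in the GCE console / Dataflow UI)
    gcloud compute ssh <worker-instance-name> --zone=<zone>

    # On the worker: find the Java harness process, then dump its heap
    # (jmap may need to run as the same user that owns the harness process)
    jps -l
    jmap -dump:live,format=b,file=/tmp/harness-heap.hprof <PID>

    # Copy the dump somewhere you can analyze it, e.g. a GCS bucket
    gsutil cp /tmp/harness-heap.hprof gs://<your-bucket>/heap-dumps/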