I am trying to join two large Spark DataFrames and keep running into this error:
Container killed by YARN for exceeding memory limits. 24 GB of 22 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
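For reference, the job is roughly shaped like this; the table names, paths, and memory values below are simplified placeholders rather than my exact setup:

```python
from pyspark.sql import SparkSession

# Placeholder sizing: roughly a 22 GB YARN container per executor,
# i.e. a 20 GB heap (spark.executor.memory) plus the ~2 GB default overhead (10% of the heap).
spark = (
    SparkSession.builder
    .appName("large-join")
    .config("spark.executor.memory", "20g")
    .config("spark.executor.cores", "4")
    .getOrCreate()
)

# Two large DataFrames read from parquet (placeholder paths).
left = spark.read.parquet("hdfs:///data/left_table")
right = spark.read.parquet("hdfs:///data/right_table")

# The join that triggers the container kill.
joined = left.join(right, on="id", how="inner")
joined.write.parquet("hdfs:///data/joined_output")
```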
This error seems to be a common issue among Spark users, but I can't find any solid description of what spark.yarn.executor.memoryOverhead actually is. In some places it sounds like a kind of memory buffer before YARN kills the container (e.g. 10 GB was requested, but YARN won't kill the container until it uses 10.2 GB). In other places it sounds like it is used for some kind of data accounting tasks that are completely separate from the analysis I want to perform. My questions are:
- What is spark.yarn.executor.memoryOverhead being used for?
- What is the benefit of increasing this kind of memory instead of executor memory (or the number of executors)?
- In general, are there steps I can take to reduce my spark.yarn.executor.memoryOverhead usage (e.g. particular data structures, limiting the width of the DataFrames, using fewer executors with more memory, etc.)?
Overhead options are nicely explained in the configuration document: spark.yarn.executor.memoryOverhead is the amount of off-heap memory allocated per executor, which accounts for things like VM overheads, interned strings, and other native overheads, and tends to grow with executor size (typically 6-10%).
This also includes user objects if you use one of the non-JVM guest languages (Python, R, etc.).
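As a rough sketch rather than a recommendation (the 4 GB figure below is just a placeholder), the overhead is a separate knob from the executor heap; YARN sizes the container as spark.executor.memory plus this overhead, so boosting it looks something like:

```python
from pyspark.sql import SparkSession

# Hypothetical sizing: keep the 20 GB heap, but raise the off-heap overhead from
# the ~2 GB default (10% of the heap) to 4 GB, so the YARN container request becomes 24 GB.
# Command-line equivalent: spark-submit --conf spark.yarn.executor.memoryOverhead=4096 ...
# (Newer Spark versions use the name spark.executor.memoryOverhead instead.)
spark = (
    SparkSession.builder
    .appName("large-join")
    .config("spark.executor.memory", "20g")
    .config("spark.yarn.executor.memoryOverhead", "4096")  # value interpreted in MB
    .getOrCreate()
)
```

Increasing the overhead leaves the JVM heap (and therefore the memory available to the join itself) unchanged; it only widens the allowance for off-heap usage, such as Python worker processes, before YARN kills the container.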