After reading through the documentation I still do not understand how Spark running on YARN accounts for Python memory consumption. Does it count towards spark.executor.memory, spark.executor.memoryOverhead, or somewhere else?
In particular, I have a PySpark application with spark.executor.memory=25G and spark.executor.cores=4, and I encounter frequent Container killed by YARN for exceeding memory limits. errors when running a map on an RDD. The map operates on a fairly large number of complex Python objects, so it is expected to use a non-trivial amount of memory, but not 25GB. How should I configure the different memory variables for use with heavy Python code?
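For reference, the job looks roughly like this (a simplified sketch; the input path and heavy_python_transform are placeholders, not my actual code):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("heavy-python-map")
        .set("spark.executor.memory", "25g")
        .set("spark.executor.cores", "4"))
sc = SparkContext(conf=conf)

def heavy_python_transform(record):
    # builds large, complex Python objects per record (placeholder)
    return record

rdd = sc.textFile("hdfs:///path/to/input")  # placeholder input path
rdd.map(heavy_python_transform).count()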
I'd try increasing spark.python.worker.memory from its default (512m), since you have heavy Python code, and this property's value does not count towards spark.executor.memory.
Amount of memory to use per python worker process during aggregation,
in the same format as JVM memory strings (e.g. 512m, 2g). If the
memory used during aggregation goes above this amount, it will spill
the data into disks. link
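For example, you could raise it when building the SparkConf; the 2g below is only an illustration, so tune it to your workload:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .set("spark.executor.memory", "25g")
        .set("spark.python.worker.memory", "2g"))  # per-Python-worker aggregation memory, example value
sc = SparkContext(conf=conf)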
ExecutorMemoryOverhead calculation in Spark:
MEMORY_OVERHEAD_FRACTION = 0.10
MEMORY_OVERHEAD_MINIMUM = 384
val executorMemoryOverhead =
  max(MEMORY_OVERHEAD_FRACTION * ${spark.executor.memory}, MEMORY_OVERHEAD_MINIMUM)
The property is spark.{yarn|mesos}.executor.memoryOverhead
for YARN and Mesos.
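Plugging in your spark.executor.memory=25G, a quick Python sketch of the same calculation gives roughly 2.5GB of overhead on top of the 25GB heap:

# worked example of the overhead formula above, values in MB
MEMORY_OVERHEAD_FRACTION = 0.10
MEMORY_OVERHEAD_MINIMUM = 384

executor_memory_mb = 25 * 1024  # spark.executor.memory=25g
executor_memory_overhead_mb = max(
    MEMORY_OVERHEAD_FRACTION * executor_memory_mb,
    MEMORY_OVERHEAD_MINIMUM)
print(executor_memory_overhead_mb)  # 2560.0, so the container request is about 25g + 2.5g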
YARN kills processes that take more memory than they requested, and the request is the sum of executorMemoryOverhead and executorMemory.
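So one option is to raise the overhead explicitly, giving the Python workers more room outside the JVM heap. A minimal sketch; the 4096 MB below is only an example value, and depending on your deploy mode you may need to pass these via spark-submit --conf instead of SparkConf:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .set("spark.executor.memory", "25g")
        .set("spark.yarn.executor.memoryOverhead", "4096"))  # in MB, example value
sc = SparkContext(conf=conf)
# YARN now requests roughly spark.executor.memory + spark.yarn.executor.memoryOverhead per container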
In the image (see credits below), the Python processes in the worker use spark.python.worker.memory, while spark.yarn.executor.memoryOverhead + spark.executor.memory covers the JVM itself.
Image credits
Additional resource: Apache mailing thread