In a Spark job on YARN, should the value of spark.yarn.executor.memoryOverhead be sized for the whole app, or is it just the max value?
It is just the max value. The goal is to size the overhead as a percentage of the real executor memory, i.e. the memory actually used by RDDs and DataFrames.

spark.executor.memory controls the executor heap size, but JVMs can also use some memory off heap, for example for interned Strings and direct byte buffers.
The value of the spark.yarn.executor.memoryOverhead property is added to the executor memory to determine the full memory request to YARN for each executor. It defaults to max(executorMemory * 0.10, 384 MB).
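For concreteness, the default can be expressed as a small calculation. This is a minimal sketch: the 0.10 factor and the 384 MB floor are the documented defaults, but the helper name and the 8 GB figure are purely illustrative.

```scala
// Sketch of the default overhead calculation described above.
// The 0.10 factor and the 384 MB floor are Spark's documented defaults;
// the helper name is illustrative only.
def defaultMemoryOverheadMb(executorMemoryMb: Long): Long =
  math.max((executorMemoryMb * 0.10).toLong, 384L)

// e.g. --executor-memory 8g => 8192 MB of heap
val executorMemoryMb = 8192L
val overheadMb       = defaultMemoryOverheadMb(executorMemoryMb) // 819 MB
val yarnRequestMb    = executorMemoryMb + overheadMb             // 9011 MB per executor container
```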
Each executor therefore receives a memory allocation based on spark.executor.memory plus the overhead defined by spark.yarn.executor.memoryOverhead.
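If the default overhead is not enough, for example when a job makes heavy use of off-heap memory, it can be set explicitly. Below is a minimal sketch, assuming Spark 2.x on YARN; the 8g and 2048 values are arbitrary examples, not recommendations.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Sketch: 8 GB of heap per executor plus 2 GB of off-heap overhead,
// so YARN sizes each executor container at roughly 10 GB.
val conf = new SparkConf()
  .set("spark.executor.memory", "8g")
  .set("spark.yarn.executor.memoryOverhead", "2048") // value is in MB

val spark = SparkSession.builder()
  .config(conf)
  .appName("memory-overhead-example")
  .getOrCreate()
```

The same properties can also be passed on the command line, e.g. spark-submit --executor-memory 8g --conf spark.yarn.executor.memoryOverhead=2048.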
A detailed description of the Spark executor memory layout on YARN can be found here.