Irrespective of the Spark executor core count, the YARN container for the executor does not use more than 1 core.
YARN shows 1 core per executor, irrespective of `spark.executor.cores`, because by default the CapacityScheduler uses `DefaultResourceCalculator`, which considers only memory when sizing containers. Switch to `DominantResourceCalculator`, which accounts for both CPU and memory, by setting the resource-calculator property in `capacity-scheduler.xml`.

More about `DominantResourceCalculator` can be found in the Hadoop CapacityScheduler documentation.
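The `capacity-scheduler.xml` change referenced above can be sketched as follows; the property name and class are the standard Hadoop CapacityScheduler settings, and restarting the ResourceManager is assumed to be needed for the change to take effect:

```xml
<!-- capacity-scheduler.xml -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <!-- DominantResourceCalculator allocates containers based on both
       CPU and memory; the default DefaultResourceCalculator counts
       memory only, which is why YARN reports 1 vcore per container. -->
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```

Note that even with `DefaultResourceCalculator`, the executor process still uses the requested number of cores; only YARN's accounting and UI display are affected.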