I'm running a Spark job on YARN and would like to get the YARN container ID (as part of a requirement to generate unique IDs across a set of Spark jobs). I can see the Container.getId() method to get the ContainerId, but I have no idea how to get a reference to the currently running container from YARN. Is this even possible? How does a YARN container get its own information?
Below is a description of how Spark stores the container ID. Spark hides the container ID and exposes only the executor ID per application/job, so if you are planning to maintain a unique ID per Spark job, my suggestion is to use the application ID that Spark gives you; you can then append a string of your own to make it unique for your purposes.
The following line is from Spark's "YarnAllocator.scala":
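To illustrate that suggestion, here is a minimal sketch. The helper name `uniqueJobId` is mine; on a real cluster the first argument would come from the driver, e.g. `sc.applicationId` on a live `SparkContext`:

```scala
object UniqueJobId {
  // Combine the YARN application ID that Spark exposes (e.g. sc.applicationId,
  // which looks like "application_<clusterTimestamp>_<appNumber>") with a
  // caller-chosen suffix to build an ID that is unique per Spark job.
  def uniqueJobId(applicationId: String, suffix: String): String =
    s"$applicationId-$suffix"
}
```

Since the application ID is already unique across the cluster, any deterministic suffix (a stage name, a date, a counter) keeps the combined ID unique without needing the container ID at all.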
private[yarn] val executorIdToContainer = new HashMap[String, Container]
YARN will export all of the environment variables listed here: https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java#L117
So you should be able to access it from the container's environment, via the CONTAINER_ID environment variable.
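A minimal sketch of reading it, assuming the code runs inside a YARN container (the driver in cluster mode, or an executor), where YARN has exported `CONTAINER_ID`; the object name is mine:

```scala
object ContainerIdLookup {
  // YARN exports the container ID under the environment variable CONTAINER_ID
  // (see ApplicationConstants.Environment.CONTAINER_ID in the Hadoop source
  // linked above). Outside a YARN container the variable is absent, so this
  // returns None instead of throwing.
  def containerId(env: Map[String, String] = sys.env): Option[String] =
    env.get("CONTAINER_ID")
}
```

Note this only works in code that actually executes inside the container's JVM; in client mode the driver runs outside YARN and will not see the variable.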
The only way I could get anything was through the logging directory: the container's log path contains the container ID, and it can be read from a Spark shell.
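A sketch of that approach, assuming Spark on YARN sets the system property `spark.yarn.app.container.log.dir` in executor JVMs and that the log path ends in a `container_...` segment; the object and method names are mine:

```scala
object ContainerIdFromLogDir {
  // YARN container IDs look like
  // container_[e<epoch>_]<clusterTimestamp>_<appNumber>_<attempt>_<containerNum>,
  // and the container log directory path ends with that ID.
  private val ContainerPattern = """container_[0-9_e]+""".r

  // Extract the container ID from a log-directory path, if present.
  def parse(logDir: String): Option[String] =
    ContainerPattern.findFirstIn(logDir)

  // On a YARN executor, Spark launches the JVM with
  // -Dspark.yarn.app.container.log.dir=<LOG_DIR>; flatten that into an ID.
  def fromSystemProperty(): Option[String] =
    Option(System.getProperty("spark.yarn.app.container.log.dir")).flatMap(parse)
}
```

Because the property is only set on YARN executors, run `fromSystemProperty()` inside a task (e.g. in a `mapPartitions`) rather than on the driver when in client mode.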