I am trying to use the YARN REST API to submit Spark jobs that I normally run via spark-submit on the command line.
My command-line spark-submit invocation looks like this:
JAVA_HOME=/usr/local/java7/ HADOOP_CONF_DIR=/etc/hadoop/conf /usr/local/spark-1.5/bin/spark-submit \
--driver-class-path "/etc/hadoop/conf" \
--class MySparkJob \
--master yarn-cluster \
--conf "spark.executor.extraClassPath=/usr/local/hadoop/client/hadoop-*" \
--conf "spark.driver.extraClassPath=/usr/local/hadoop/client/hadoop-*" \
spark-job.jar --retry false --counter 10
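The application jar is already staged in HDFS (that is the hdfs:///spark-job.jar the payload below points at). As far as I understand, the size and timestamp fields of a local resource have to match the file in HDFS exactly, so I read them off like this (paths are from my setup):

hdfs dfs -put spark-job.jar /spark-job.jar
# %b = file size in bytes, %Y = modification time in ms since epoch;
# these are the values that go into "size" and "timestamp" in the payload
hdfs dfs -stat "%b %Y" /spark-job.jar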
Reading through the YARN REST API documentation (https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_APISubmit_Application), I tried to build the JSON payload to POST, which looks like this:
{
"am-container-spec": {
"commands": {
"command": "JAVA_HOME=/usr/local/java7/ HADOOP_CONF_DIR=/etc/hadoop/conf org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster --jar spark-job.jar --class MySparkJob --arg --retry --arg false --arg --counter --arg 10"
},
"local-resources": {
"entry": [
{
"key": "spark-job.jar",
"value": {
"resource": "hdfs:///spark-job.jar",
"size": 3214567,
"timestamp": 1452408423000,
"type": "FILE",
"visibility": "APPLICATION"
}
}
]
}
},
"application-id": "application_11111111111111_0001",
"application-name": "test",
"application-type": "Spark"
}
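For completeness, this is how I am calling the API (rm-host is a placeholder for my ResourceManager host). Per the docs, I first ask the RM for a fresh application id, then POST the payload with that id filled in:

# returns {"application-id":"application_...","maximum-resource-capability":{...}}
curl -s -X POST http://rm-host:8088/ws/v1/cluster/apps/new-application

# submit.json is the payload above, with "application-id" set to the id just returned
curl -s -X POST -H "Content-Type: application/json" \
  -d @submit.json \
  http://rm-host:8088/ws/v1/cluster/apps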
The problem I see is that the Hadoop config directory was previously local to the machine I submitted jobs from. Now that I submit the job via the REST API and it runs directly on the RM, I am not sure how to provide these details (JAVA_HOME, HADOOP_CONF_DIR, the extra classpath entries).
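One idea I was considering, based on the Submit Application docs: am-container-spec also accepts an environment map, so if the cluster nodes carry their own copy of the config I could point the container at it, roughly like this (the values are just the ones from my command line, and the trailing "..." stands for the rest of the spec shown above):

"am-container-spec": {
  "environment": {
    "entry": [
      { "key": "JAVA_HOME", "value": "/usr/local/java7/" },
      { "key": "HADOOP_CONF_DIR", "value": "/etc/hadoop/conf" }
    ]
  },
  ...
}

Alternatively I could zip the conf directory, ship it as an ARCHIVE entry under local-resources, and point HADOOP_CONF_DIR at the localized copy, but I have not tried that yet. Is either of these the right way to pass these details through the REST API?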