EDIT:
I'm developing a Spark application that reads data from multiple structured schemas and aggregates the information across them. The application runs fine locally, but on the cluster I run into trouble, most probably with the configuration (hive-site.xml) or with the spark-submit arguments. I've looked through other related posts but couldn't find a solution specific to my scenario. Below I've described in detail which commands I tried and which errors I got. I'm new to Spark, so I might be missing something trivial, but I can provide more information to support my question.
Original Question:
I've been trying to run my Spark application on a 6-node Hadoop cluster bundled with HDP 2.3 components.
Here is the component information that might be useful for suggesting a solution:
Cluster information: 6 nodes, each with 128 GB RAM, 24 cores, and 8 TB HDD
Components used in the application
HDP - 2.3
Spark - 1.3.1
$ hadoop version:
Hadoop 2.7.1.2.3.0.0-2557
Subversion git@github.com:hortonworks/hadoop.git -r 9f17d40a0f2046d217b2bff90ad6e2fc7e41f5e1
Compiled by jenkins on 2015-07-14T13:08Z
Compiled with protoc 2.5.0
From source with checksum 54f9bbb4492f92975e84e390599b881d
Scenario:
I'm trying to use SparkContext and HiveContext together to take full advantage of Spark's real-time querying over its data structures, such as DataFrames (a minimal sketch of the setup follows the dependency list below). The dependencies used in my application are:
<dependency> <!-- Spark dependency -->
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.3.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.3.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-hive_2.10</artifactId>
<version>1.3.1</version>
</dependency>
<dependency>
<groupId>com.databricks</groupId>
<artifactId>spark-csv_2.10</artifactId>
<version>1.4.0</version>
</dependency>
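For reference, here is a minimal sketch of how the two contexts are wired up in my application (the app name, schema, and query are placeholders, not my actual code):
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.hive.HiveContext

object Main {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("schema-aggregation")
    val sc = new SparkContext(conf)
    // HiveContext needs a reachable metastore, configured via
    // hive-site.xml on the driver side.
    val hive = new HiveContext(sc)

    // Placeholder aggregation across the structured schemas.
    val df: DataFrame = hive.sql(
      "SELECT key, COUNT(*) AS cnt FROM schema_a.events GROUP BY key")
    df.show()
  }
}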
Below are the submit commands and the corresponding error logs that I'm getting:
Submit Command1:
spark-submit --class working.path.to.Main \
--master yarn \
--deploy-mode cluster \
--num-executors 17 \
--executor-cores 8 \
--executor-memory 25g \
--driver-memory 25g \
application-with-all-dependencies.jar
Error Log1:
User class threw exception: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
My understanding is that in cluster mode the driver runs on a cluster node and therefore doesn't see the local /etc/hive/conf/hive-site.xml, so I next tried shipping that file along with --files:
Submit Command2:
spark-submit --class working.path.to.Main \
--master yarn \
--deploy-mode cluster \
--num-executors 17 \
--executor-cores 8 \
--executor-memory 25g \
--driver-memory 25g \
--files /etc/hive/conf/hive-site.xml \
application-with-all-dependencies.jar
Error Log2:
User class threw exception: java.lang.NumberFormatException: For input string: "5s"
From what I can tell, the "5s" is one of the time-suffixed interval values in HDP's hive-site.xml (for example, hive.metastore.client.connect.retry.delay=5s), which the older Hive client bundled with Spark 1.3.1 tries to parse as a plain number.
Since I don't have administrative permissions, I cannot modify the configuration myself. I could contact the IT engineer and have the changes made, but I'm looking for a solution that involves as few changes to the configuration files as possible!
Configuration changes were suggested here.
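One lower-touch idea I've seen suggested, which I haven't verified myself and so treat as a sketch, is to pass a trimmed copy of hive-site.xml via --files that contains only the metastore URI, so the driver never has to parse the time-suffixed values:
<?xml version="1.0"?>
<configuration>
  <!-- Trimmed hive-site.xml: only the metastore URI.
       thrift://metastore-host:9083 is a placeholder for the
       cluster's actual metastore address. -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>
The idea would be to pass this trimmed file with --files instead of the full /etc/hive/conf/hive-site.xml.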
I then tried passing various JAR files as arguments, as suggested in other discussion forums.
Submit Command3:
spark-submit --class working.path.to.Main \
--master yarn \
--deploy-mode cluster \
--num-executors 17 \
--executor-cores 8 \
--executor-memory 25g \
--driver-memory 25g \
--jars /usr/hdp/2.3.0.0-2557/spark/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/2.3.0.0-2557/spark/lib/datanucleus-core-3.2.10.jar,/usr/hdp/2.3.0.0-2557/spark/lib/datanucleus-rdbms-3.2.9.jar \
--files /etc/hive/conf/hive-site.xml \
application-with-all-dependencies.jar
Error Log3:
User class threw exception: java.lang.NumberFormatException: For input string: "5s"
I didn't understand what happened with the following command and couldn't analyze the error log; my suspicion about the wildcard is noted after the command below.
Submit Command4:
spark-submit --class working.path.to.Main \
--master yarn \
--deploy-mode cluster \
--num-executors 17 \
--executor-cores 8 \
--executor-memory 25g \
--driver-memory 25g \
--jars /usr/hdp/2.3.0.0-2557/spark/lib/*.jar \
--files /etc/hive/conf/hive-site.xml \
application-with-all-dependencies.jar
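(My suspicion: --jars expects a comma-separated list, but the shell expands the wildcard to space-separated paths, so everything after the first JAR was probably misparsed. An untested, comma-joined variant would look something like this, with the rest of the command unchanged:)
--jars "$(echo /usr/hdp/2.3.0.0-2557/spark/lib/*.jar | tr ' ' ',')" \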
Submit Log4:
Application application_1461686223085_0014 failed 2 times due to AM Container for appattempt_1461686223085_0014_000002 exited with exitCode: 10
For more detailed output, check the application tracking page: http://cluster-host:XXXX/cluster/app/application_1461686223085_0014 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e10_1461686223085_0014_02_000001
Exit code: 10
Stack trace: ExitCodeException exitCode=10:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 10
Failing this attempt. Failing the application.
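If it helps with diagnosis, I should also be able to pull the full container logs for the failed attempts with YARN's standard log command, e.g.:
yarn logs -applicationId application_1461686223085_0014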
Are there any other possible options? Any kind of help will be highly appreciated. Please let me know if you need any other information.
Thank you.