I am a beginner with Hive. Something goes wrong (a "can not find table" error) when I start a Spark job that reads data from Hive. Could this be because I did not put hive-site.xml in $SPARK_HOME/conf?
The command I use to submit the Spark job is:

```bash
bin/spark-submit --master local[*] --driver-memory 8g --executor-memory 8g --class com.ctrip.ml.client.Client /root/GitLab/di-ml-tool/target/di-ml-tool-1.0-SNAPSHOT.jar
```
Just copy hive-site.xml to the Spark conf directory and it will work.
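The exact contents vary by setup, but a minimal sketch, assuming a remote Hive metastore, looks like this (the host name, port, and warehouse path are placeholders, not values from the original setup):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- Thrift URI of the running Hive metastore service (placeholder host). -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://your-metastore-host:9083</value>
  </property>
  <!-- HDFS directory used as the Hive warehouse (placeholder path). -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
</configuration>
```

With this file in $SPARK_HOME/conf, Spark talks to the shared metastore instead of spinning up an empty local Derby one, which is why the missing table becomes visible.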
I believe it might depend on the distribution you're using. I encountered this problem recently, and this fixed the issue for me. I'm using HDP 2.3.2, so my copy of hive-site.xml in the Spark conf folder contains only the bare minimum (see the sketch below).
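For that kind of Ambari-managed HDP setup, a stripped-down hive-site.xml that only points Spark at the existing metastore could look like this (the host name is a placeholder; 9083 is the default metastore Thrift port):

```xml
<configuration>
  <!-- Only the metastore URI is needed for Spark to find the Hive tables. -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://your-hdp-node:9083</value>
  </property>
</configuration>
```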
On your Hive distribution you have a template file that must be converted into your own site file:
https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration#AdminManualConfiguration-hive-site.xmlandhive-default.xml.template
So first of all you must create your own hive-site.xml file by copying hive-default.xml.template, and then you can use it from Spark.
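As a sketch, assuming the usual HIVE_HOME and SPARK_HOME environment variables point at your installations (adjust the paths if yours differ):

```bash
# Create hive-site.xml from the template shipped with Hive.
cp "$HIVE_HOME/conf/hive-default.xml.template" "$HIVE_HOME/conf/hive-site.xml"

# Edit hive-site.xml as needed, then make it visible to Spark.
cp "$HIVE_HOME/conf/hive-site.xml" "$SPARK_HOME/conf/"
```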
If you don't want to use the default file, you can use any of the configurations shown in the previous answers.