I am using the Hive Metastore on EMR. I am able to query the table manually through HiveQL.
But when I use the same table in a Spark job, it fails with:
Caused by: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: s3://....
I have deleted that partition's path in s3://.., yet the query still works in Hive without dropping the partition at the table level. In PySpark, however, it fails either way.
Here is my full code:
from pyspark import SparkContext
from pyspark.sql import SQLContext

# count rows in the Hive table through the metastore catalog
sc = SparkContext(appName="test")
sqlContext = SQLContext(sparkContext=sc)
sqlContext.sql("select count(*) from logan_test.salary_csv").show()
print("done..")
I submitted my job as below so that it uses the Hive catalog tables (the --files option has to come before the application file, otherwise spark-submit passes it to the script as an argument):
spark-submit --files /usr/lib/hive/conf/hive-site.xml test.py
I have had a similar error with HDFS, where the Metastore kept a partition for the table but the directory was missing.
Check s3... If the path is missing, or you deleted it, you need to run
MSCK REPAIR TABLE
from Hive. Sometimes this doesn't work, and you actually do need a DROP PARTITION.
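For example, assuming the table is partitioned by a hypothetical year column (the question doesn't show the partition scheme), the two options from Hive look like:

-- re-sync the metastore with the partition directories that actually exist
MSCK REPAIR TABLE logan_test.salary_csv;

-- or explicitly drop the metadata for a partition whose directory was deleted
ALTER TABLE logan_test.salary_csv DROP PARTITION (year = 2017);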
On the Spark side, spark.sql.hive.verifyPartitionPath makes Spark verify that each partition path exists and skip the missing ones instead of failing. That property is false by default, but you set configuration properties by passing a SparkConf object to SparkContext.
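A minimal sketch of that, using the Spark 1.x-style entry point and the logan_test.salary_csv table from the question:

from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext

# skip partitions whose directories are gone instead of
# failing with InvalidInputException
conf = SparkConf().set("spark.sql.hive.verifyPartitionPath", "true")
sc = SparkContext(appName="test", conf=conf)
hiveContext = HiveContext(sc)
hiveContext.sql("select count(*) from logan_test.salary_csv").show()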
Or, the Spark 2 way is to use a SparkSession.
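A sketch of the same thing in Spark 2, again against the table from the question; enableHiveSupport() is what points the session at the Hive metastore:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .appName("test")
    .config("spark.sql.hive.verifyPartitionPath", "true")
    .enableHiveSupport()  # use the Hive metastore as the catalog
    .getOrCreate())
spark.sql("select count(*) from logan_test.salary_csv").show()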