I can load data from a Hive server in the same cluster where Apache Spark is installed. But how can I load data into a DataFrame from a remote Hive server? Is the Hive JDBC connector the only option to do so?
Any suggestions on how I can do this?
You can use
org.apache.spark.sql.hive.HiveContext
to run SQL queries over Hive tables. Alternatively, you can connect Spark directly to the underlying HDFS directory where the data is actually stored. This can be more performant, since the SQL query doesn't need to be parsed and the schema doesn't need to be applied over the files.
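A minimal sketch of both approaches, assuming a Spark 1.x-style HiveContext and placeholder database, table, and HDFS path names:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("hive-read"))
val hiveContext = new HiveContext(sc)

// Option 1: query the Hive table through the metastore
val df = hiveContext.sql("SELECT * FROM mydb.mytable")

// Option 2: bypass Hive and read the underlying files directly from HDFS
// (assumes the table data is stored as Parquet at this hypothetical warehouse path)
val raw = hiveContext.read
  .parquet("hdfs://namenode:8020/user/hive/warehouse/mydb.db/mytable")
```

Note that in Spark 2.x and later, `HiveContext` is superseded by a `SparkSession` built with `enableHiveSupport()`, but the older API still works.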
If the cluster is an external one, you'll need to set
hive.metastore.uris
to point Spark at the remote metastore's Thrift endpoint.
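A sketch of how that property can be set when building the session (Spark 2.x+ API); the Thrift host and port below are placeholders for your metastore:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("remote-hive")
  // Point Spark at the remote Hive metastore (hypothetical host; 9083 is the common default port)
  .config("hive.metastore.uris", "thrift://remote-metastore-host:9083")
  .enableHiveSupport()
  .getOrCreate()

// Tables registered in the remote metastore are now queryable
val df = spark.sql("SELECT * FROM mydb.mytable")
```

The same property can also be supplied in a `hive-site.xml` placed on Spark's classpath, or via `--conf spark.hadoop.hive.metastore.uris=...` on `spark-submit`.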