I have a Spark application that successfully connects to Hive and queries Hive tables using the Spark engine. To build this, I simply added hive-site.xml to the application's classpath, and Spark reads hive-site.xml to connect to its metastore. This method was suggested on Spark's mailing list.
So far so good. Now I want to connect to two Hive stores, and I don't think adding another hive-site.xml to my classpath will help. I have referred to quite a few articles and Spark mailing list threads but could not find anyone doing this.
Can someone suggest how I can achieve this?
Thanks.
Docs referred:
This doesn't seem to be possible in the current version of Spark. Reading the HiveContext code in the Spark repo, it appears that hive.metastore.uris can be configured with multiple metastore URIs (e.g., a comma-separated list of Thrift URIs such as thrift://host1:9083,thrift://host2:9083), but this appears to be used only for redundancy against the same metastore, not for totally different metastores. More information here: https://cwiki.apache.org/confluence/display/Hive/AdminManual+MetastoreAdmin
But you will probably have to aggregate the data somewhere in order to work on it in unison. Or you could create a separate Spark context for each store.
You could try configuring hive.metastore.uris to point at multiple different metastores, but it probably won't work. If you do decide to create a separate Spark context for each store, then make sure you set spark.driver.allowMultipleContexts, though this is generally discouraged and may lead to unexpected results.
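For what it's worth, a minimal sketch of that (discouraged) multiple-contexts setup might look like the following. The master URL and app names are placeholders, and nothing here by itself makes the two contexts point at different metastores:

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: running two SparkContexts in one JVM requires this flag,
    // but the approach is discouraged and may behave unexpectedly.
    val conf1 = new SparkConf()
      .setAppName("HiveStoreOne")
      .setMaster("local[*]") // placeholder master URL
      .set("spark.driver.allowMultipleContexts", "true")

    val conf2 = new SparkConf()
      .setAppName("HiveStoreTwo")
      .setMaster("local[*]") // placeholder master URL
      .set("spark.driver.allowMultipleContexts", "true")

    val sc1 = new SparkContext(conf1) // intended for the first Hive store
    val sc2 = new SparkContext(conf2) // intended for the second Hive store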
I think this is possible by making use of Spark SQL's ability to connect to and read data from remote databases using JDBC.
After exhaustive R&D, I was able to successfully connect to two different Hive environments using JDBC and load the Hive tables as DataFrames into Spark for further processing.
Environment details
hadoop-2.6.0
apache-hive-2.0.0-bin
spark-1.3.1-bin-hadoop2.6
Code sample: HiveMultiEnvironment.scala
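The original listing is not reproduced above, so here is a minimal sketch of the approach it describes, assuming both clusters expose HiveServer2 over JDBC. All host names, ports, databases, and table names (env1-host, employees, and so on) are placeholders rather than values from the original code:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object HiveMultiEnvironment {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("HiveMultiEnvironment").setMaster("local[*]")
        val sc = new SparkContext(conf)
        val sqlContext = new SQLContext(sc)

        // Environment 1: load a Hive table through its HiveServer2 JDBC endpoint
        val df1 = sqlContext.load("jdbc", Map(
          "url"     -> "jdbc:hive2://env1-host:10000/default", // placeholder host/port/db
          "driver"  -> "org.apache.hive.jdbc.HiveDriver",
          "dbtable" -> "employees"                              // placeholder table
        ))

        // Environment 2: same pattern against the second cluster's HiveServer2
        val df2 = sqlContext.load("jdbc", Map(
          "url"     -> "jdbc:hive2://env2-host:10000/default", // placeholder host/port/db
          "driver"  -> "org.apache.hive.jdbc.HiveDriver",
          "dbtable" -> "departments"                            // placeholder table
        ))

        // Both tables now live as DataFrames in the same SparkContext,
        // so they can be joined or otherwise processed together.
        df1.show()
        df2.show()

        sc.stop()
      }
    }

The Hive JDBC driver jar (and its dependencies) needs to be on the application's classpath for the driver class to resolve.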
Other parameters can also be set during the load using SQLContext, such as partitionColumn (as sketched below). Details can be found under the 'JDBC To Other Databases' section in the Spark reference doc: https://spark.apache.org/docs/1.3.0/sql-programming-guide.html
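As a rough illustration of those load-time options, a partitioned variant of the load above might look like this. The partition column and bounds are invented for the example and would need to refer to a numeric column in the real table:

    // Partitioned JDBC read: Spark issues one query per partition over the given range.
    val partitionedDf = sqlContext.load("jdbc", Map(
      "url"             -> "jdbc:hive2://env1-host:10000/default", // placeholder host/port/db
      "driver"          -> "org.apache.hive.jdbc.HiveDriver",
      "dbtable"         -> "employees",     // placeholder table
      "partitionColumn" -> "emp_id",        // hypothetical numeric column
      "lowerBound"      -> "1",
      "upperBound"      -> "100000",
      "numPartitions"   -> "4"
    ))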
Build path from Eclipse:
What I Haven't Tried
Use of HiveContext for Environment 1 and SQLContext for Environment 2
Hope this will be useful.