Tables not found in Spark SQL after migrating from EMR to AWS Glue

Posted 2019-08-16 02:05

I have Spark jobs on EMR, and EMR is configured to use the Glue catalog for Hive and Spark metadata.

I create Hive external tables, they appear in the Glue catalog, and my Spark jobs can reference them in Spark SQL, e.g. spark.sql("select * from hive_table ...")

Now, when I try to run the same code in a Glue job, it fails with a "table not found" error. It looks like Glue jobs do not use the Glue catalog for Spark SQL the same way Spark SQL does when running on EMR.

I can work around this by using Glue APIs and registering dataframes as temp views:

glueContext.create_dynamic_frame_from_catalog(...).toDF().createOrReplaceTempView(...)

but is there a way to do this automatically?

2 Answers
姐就是有狂的资本
Answered 2019-08-16 02:27

Instead of using SparkContext.getOrCreate(), you should use SparkSession.builder().enableHiveSupport().getOrCreate(); enableHiveSupport() is the important part that's missing. What's probably happening is that your Spark job is not actually creating the tables in Glue but in Spark's embedded Hive metastore, since you have not enabled Hive support.

Answered 2019-08-16 02:41

This was a much-awaited feature request (to use the Glue Data Catalog with Glue ETL jobs), and it was released recently. When you create a new job, you'll find the following option:

Use Glue data catalog as the Hive metastore

You can also enable it for an existing job by editing the job and adding --enable-glue-datacatalog to the job parameters, leaving its value empty.
