SparkSQL - Read parquet file directly

Posted 2019-01-18 12:37

I am migrating from Impala to SparkSQL, using the following code to read a table:

my_data = sqlContext.read.parquet('hdfs://my_hdfs_path/my_db.db/my_table')

How do I invoke SparkSQL on this, so that I can run a query like:

'select col_A, col_B from my_table'

2 Answers
欢心 answered 2019-01-18 12:55

With plain SQL

JSON, ORC, Parquet, and CSV files can be queried with plain SQL directly, without first creating a table or registering a DataFrame.

// Spark 2.x; the same query also works through sqlContext in Spark 1.x
val spark: SparkSession = SparkSession.builder.master("set_the_master").getOrCreate

spark.sql("select col_A, col_B from parquet.`hdfs://my_hdfs_path/my_db.db/my_table`")
   .show()
我想做一个坏孩纸 answered 2019-01-18 12:57

After creating a DataFrame from the parquet file, you have to register it as a temp table to run SQL queries on it.

val sqlContext = new org.apache.spark.sql.SQLContext(sc)

val df = sqlContext.read.parquet("src/main/resources/peopleTwo.parquet")

df.printSchema

// after registering as a table you will be able to run sql queries
// (registerTempTable is deprecated in Spark 2.x; use createOrReplaceTempView)
df.registerTempTable("people")

sqlContext.sql("select * from people").collect.foreach(println)