I am migrating from Impala to SparkSQL, using the following code to read a table:
my_data = sqlContext.read.parquet('hdfs://my_hdfs_path/my_db.db/my_table')
How do I invoke SparkSQL on the DataFrame above so that it can run a query like:
'select col_A, col_B from my_table'
With plain SQL
JSON, ORC, Parquet, and CSV files can be queried with plain SQL directly, without first loading them into a DataFrame and registering a table.
Alternatively, after creating a DataFrame from the Parquet file, register it as a temp view so you can run SQL queries against it.