I am using the following code to create and insert data into a Hive table from Spark SQL:
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession
  .builder()
  .appName("App")
  .master("local[2]")
  .config("spark.sql.warehouse.dir", "file:///tmp/spark-warehouse")
  .enableHiveSupport()
  .getOrCreate()

// actual code

result.createOrReplaceTempView("result")
result.write
  .format("parquet")
  .partitionBy("year", "month")
  .mode(SaveMode.Append)
  .saveAsTable("tablename")
This runs without errors, and a result.show(10) confirms the DataFrame contains data. The input files are CSV files on the local filesystem.

The write creates Parquet files under ./spark-warehouse/tablename/ and also registers the table in the Hive metastore:
$ tree
.
└── tablename
├── _SUCCESS
└── year=2017
└── month=01
├── part-r-00013-abaea338-8ed3-4961-8598-cb2623a78ce1.snappy.parquet
├── part-r-00013-f42ce8ac-a42c-46c5-b188-598a23699ce8.snappy.parquet
├── part-r-00018-abaea338-8ed3-4961-8598-cb2623a78ce1.snappy.parquet
└── part-r-00018-f42ce8ac-a42c-46c5-b188-598a23699ce8.snappy.parquet
And in Hive:
hive> show create table tablename;
OK
CREATE TABLE `tablename`(
`col` array<string> COMMENT 'from deserializer')
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'path'='file:/Users/IdeaProjects/project/spark-warehouse/tablename')
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.SequenceFileInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'
LOCATION
'file:/tmp/spark-warehouse/tablename'
TBLPROPERTIES (
'EXTERNAL'='FALSE',
'spark.sql.sources.provider'='parquet',
'spark.sql.sources.schema.numPartCols'='2',
'spark.sql.sources.schema.numParts'='1',
'spark.sql.sources.schema.part.0'='{
// fields
}',
'spark.sql.sources.schema.partCol.0'='year',
'spark.sql.sources.schema.partCol.1'='month',
'transient_lastDdlTime'='1488157476')
However, the table is empty:
hive> select count(*) from tablename;
...
OK
0
Time taken: 1.89 seconds, Fetched: 1 row(s)
Software used: Spark 2.1.0 (with spark-sql and spark-hive_2.10), Hive 2.1.0 with a MySQL metastore, Hadoop 2.7.0, macOS 10.12.3
Spark SQL's table partitioning is not compatible with Hive's; this limitation is documented in SPARK-14927.

As a recommended workaround, create the partitioned table in Hive first and only insert into it from Spark.

You should then just need to run

MSCK REPAIR TABLE tablename

whenever new partitions are added, so that Hive discovers them. See the Hive docs: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RecoverPartitions(MSCKREPAIRTABLE)
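A minimal sketch of that workaround. The column name col1 and the location are assumptions, since the question elides the actual schema: define the partitioned table in Hive (or via spark.sql), then append from Spark and repair the partition metadata.

```sql
-- Hypothetical DDL: column name and location are assumptions.
CREATE EXTERNAL TABLE tablename (col1 STRING)
PARTITIONED BY (year INT, month INT)
STORED AS PARQUET
LOCATION 'file:///tmp/spark-warehouse/tablename';

-- After Spark has written new partition directories:
MSCK REPAIR TABLE tablename;
```

From Spark, the insert would then look something like result.write.mode(SaveMode.Append).insertInto("tablename"). Note that inserting into a partitioned Hive table may also require enabling dynamic partitioning first (hive.exec.dynamic.partition.mode=nonstrict).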