I have a DataFrame that I am trying to partition by a column, sort by that column, and save in Parquet format using the following command:
df.write().format("parquet")
.partitionBy("dynamic_col")
.sortBy("dynamic_col")
.save("test.parquet");
I get the following error:
reason: User class threw exception: org.apache.spark.sql.AnalysisException: 'save' does not support bucketing right now;
Is save(...)
not allowed?
Is only saveAsTable(...)
allowed, which saves the data to Hive?
Any suggestions are helpful.
The problem is that
sortBy
is currently (Spark 2.3.1) supported only together with bucketing, bucketing needs to be used in combination with saveAsTable, and the bucket sorting column should not be part of the partition columns. So you have two options:
1. Do not use sortBy at all.
2. Use sortBy together with bucketing and save through the metastore using saveAsTable.