This command works with HiveQL:
insert overwrite directory '/data/home.csv' select * from testtable;
But with Spark SQL I get an error with an org.apache.spark.sql.hive.HiveQl
stack trace:
java.lang.RuntimeException: Unsupported language features in query:
insert overwrite directory '/data/home.csv' select * from testtable
How can I export query results to a CSV file from Spark SQL?
The simplest way is to map over the DataFrame's RDD and use mkString:
As of Spark 1.5 (or even before that):

df.map(r => r.mkString(","))

Note that this does no CSV escaping. If your fields can contain commas, quotes, or newlines, you can use Apache Commons Lang (StringEscapeUtils.escapeCsv) to escape each field before joining.
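A minimal sketch of that approach, assuming Spark 1.x APIs and commons-lang3 on the classpath (the output path is taken from the question; saveAsTextFile writes a directory of part files, not a single file):

```scala
import org.apache.commons.lang3.StringEscapeUtils

// Escape each field so embedded commas, quotes, and newlines survive,
// then join the fields with commas to form one CSV line per Row.
val csvLines = df.rdd.map { row =>
  row.toSeq
    .map(field => StringEscapeUtils.escapeCsv(String.valueOf(field)))
    .mkString(",")
}

// Writes a directory of part-* files; merge them afterwards if you need one file.
csvLines.saveAsTextFile("/data/home.csv")
```

String.valueOf is used rather than toString so that null fields become the literal "null" instead of throwing a NullPointerException.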