In the Spark docs it's clear how to create Parquet files from an RDD of your own case classes (from the docs):
val people: RDD[Person] = ??? // An RDD of case class objects, from the previous example.
// The RDD is implicitly converted to a SchemaRDD by createSchemaRDD, allowing it to be stored using Parquet.
people.saveAsParquetFile("people.parquet")
But it's not clear how to convert back. Really we want a method readParquetFile where we can do:
val people: RDD[Person] = sc.readParquetFile[Person](path)
where the values of the case class are those read by the method.
The best solution I've come up with that requires the least amount of copying and pasting for new classes is as follows (I'd still like to see another solution though).
First you have to define your case class, and a (partially) reusable factory method
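A minimal sketch of what that might look like (Person and personFromRow are illustrative names, assuming a two-column schema of name: String and age: Int):

import org.apache.spark.sql.Row

case class Person(name: String, age: Int)

// Factory: build a Person from a Row whose columns match the case class
// fields in order. Only these accessors change for a new case class.
def personFromRow(row: Row): Person =
  Person(row.getString(0), row.getInt(1))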
Some boilerplate, which will already be available
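For completeness, the usual Spark SQL setup for the 1.x API (nothing here is specific to this problem):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("parquet-read"))
val sqlContext = new SQLContext(sc)
// Needed on the write side: implicitly converts RDD[Person] to a SchemaRDD.
import sqlContext.createSchemaRDD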
The magic
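A sketch of the generic reader. Note that readParquetFile here is our own helper, not part of the Spark API; it uses the sqlContext from the boilerplate above:

import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row

// Load a Parquet file as a SchemaRDD (an RDD[Row]) and map each Row
// through the supplied factory to recover the case class.
def readParquetFile[T: ClassTag](path: String)(fromRow: Row => T): RDD[T] =
  sqlContext.parquetFile(path).map(fromRow)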
Example use
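Putting it together with the names from the sketch above:

// Read back the file written with saveAsParquetFile and get an RDD of Person.
val people: RDD[Person] = readParquetFile[Person]("people.parquet")(personFromRow)
people.take(5).foreach(println)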
See also:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-Convert-SchemaRDD-back-to-RDD-td9071.html
Though I failed to find any example or documentation by following the JIRA link.
Very crufty attempt. Very unconvinced this will have decent performance. Surely there must be a macro-based alternative...
An easy way is to provide your own converter (Row) => CaseClass. This is a bit more manual, but if you know what you are reading, it should be quite straightforward. Here is an example:
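A sketch of such a converter, assuming a Person(name: String, age: Int) schema and an existing sqlContext:

import org.apache.spark.sql.Row

case class Person(name: String, age: Int)

// Manual converter: pull each column out of the Row by position.
def rowToPerson(row: Row): Person = Person(row.getString(0), row.getInt(1))

// parquetFile returns a SchemaRDD (an RDD[Row]), so map gives an RDD[Person].
val people = sqlContext.parquetFile("people.parquet").map(rowToPerson)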
There is a simple method to convert a SchemaRDD to an RDD using PySpark in Spark 1.2.1. There must be a similar approach using Scala.
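The same idea works directly in Scala, since a SchemaRDD is itself an RDD[Row]; for example, assuming the Person schema and sqlContext from the other answers:

val people = sqlContext.parquetFile("people.parquet")
  .map(row => Person(row.getString(0), row.getInt(1)))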