I am struggling with a step where I want to write each RDD partition to a separate Parquet file in its own directory. For example:
    <root>
        <entity=entity1>
            <year=2015>
                <week=45>
                    data_file.parquet
The advantage of this format is that I can use the partition keys directly as columns in Spark SQL, so I do not have to repeat that data inside the actual files. It would also be a good way to reach a specific partition without storing separate partitioning metadata somewhere else.
As a preceding step, I have loaded all the data from a large number of gzip files and partitioned it by the key above.
One possible approach would be to get each partition as a separate RDD and then write it out, though I couldn't find a good way of doing that.
Any help would be appreciated. By the way, I am new to this stack.