As per the official doc, from Spark 1.6 onwards we can no longer load specific Hive partitions into a DataFrame.
Up to Spark 1.5 the following used to work, and the DataFrame would have the entity column and the data, as shown below:
DataFrame df = hiveContext.read().format("orc").load("path/to/table/entity=xyz");
However, this no longer works in Spark 1.6.
If I give the base path instead, the DataFrame does not contain the entity column, which I want, as shown below:
DataFrame df = hiveContext.read().format("orc").load("path/to/table/");
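For reference, this is how I check the resulting columns (a minimal sketch; printSchema just prints the DataFrame's schema to stdout):

df.printSchema(); // per what I observe, entity is missing from this output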
How do I load a specific Hive partition into a DataFrame? What was the driver behind removing this feature?
I believe it was efficient. Is there an alternative to achieve that in Spark 1.6?
As per my understanding, Spark 1.6 loads all partitions, and if I then filter for the specific partition it is not efficient: it hits memory limits and throws GC (garbage collection) errors, because thousands of partitions get loaded into memory rather than just the one I need. The kind of filter-after-load code I mean is sketched below.
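A minimal sketch of that filter-after-load approach, assuming entity is the partition column and xyz is the value I actually need:

// Loading the base path triggers partition discovery over every entity=... directory.
DataFrame df = hiveContext.read().format("orc").load("path/to/table/");
// The filter only narrows the result afterwards; all partitions have already been
// discovered by then, which as far as I can tell is where the memory/GC pressure comes from.
DataFrame xyzOnly = df.filter("entity = 'xyz'");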
Please guide. Thanks in advance.