How to load a specific Hive partition into a DataFrame in Spark 1.6?

Posted 2019-02-14 06:32

Question:

From Spark 1.6 onwards, as per the official docs, we cannot add specific Hive partitions to a DataFrame.

Up to Spark 1.5 the following used to work, and the DataFrame would contain the entity column and the data, as shown below:

DataFrame df = hiveContext.read().format("orc").load("path/to/table/entity=xyz");

However, this would not work in Spark 1.6.

If I give the base path like the following, the resulting DataFrame does not contain the entity column, which I want in the DataFrame, as shown below:

DataFrame df = hiveContext.read().format("orc").load("path/to/table/");

How do I load a specific Hive partition into a DataFrame? What was the driver behind removing this feature?

I believe it was efficient. Is there an alternative way to achieve that in Spark 1.6?

As per my understanding, Spark 1.6 loads all partitions, and if I then filter for the specific partition it is not efficient: it hits memory and throws GC (garbage collection) errors, because thousands of partitions get loaded into memory rather than only the specific one.
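For illustration, a minimal sketch of that inefficient workaround in Spark 1.6's Java API, assuming the same hypothetical path/to/table layout and entity partition column as above:

// Load the whole table from the base path, then filter on the partition
// column. Spark still has to discover every partition under
// path/to/table/ before the filter is applied, which is the problem.
DataFrame all = hiveContext.read().format("orc").load("path/to/table/");
DataFrame xyz = all.filter("entity = 'xyz'");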

Please guide. Thanks in advance.

Answer 1:

To load a specific partition into a DataFrame using Spark 1.6, we have to do the following: first set basePath, and then give the path of the partition that needs to be loaded.

DataFrame df = hiveContext.read().format("orc")
               .option("basePath", "path/to/table/")
               .load("path/to/table/entity=xyz");

The above code will load only the specific partition into the DataFrame, and because basePath points at the table root, the entity partition column is retained.
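As a quick check (a sketch only; the exact schema depends on your data), the entity column should now show up, and, if memory serves, Spark 1.6's DataFrameReader.load also accepts multiple paths, so several partitions can be combined under the same basePath (entity=abc below is a hypothetical second partition):

// Verify that the partition column survived the load.
df.printSchema();  // entity should be listed alongside the data columns
df.show();

// load(String... paths) takes varargs in Spark 1.6, so multiple specific
// partitions can be loaded together under one basePath.
// entity=abc is a hypothetical second partition, for illustration.
DataFrame two = hiveContext.read().format("orc")
                .option("basePath", "path/to/table/")
                .load("path/to/table/entity=xyz", "path/to/table/entity=abc");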