Does spark keep all elements of an RDD[K,V] for a particular key in a single partition after groupByKey?

Posted 2019-04-10 20:33

Question:

Suppose I have a pair RDD with, say, 10 partitions, but the keys are not evenly distributed: nine of the partitions hold data belonging to a single key, say a, while the remaining keys, say b and c, appear only in the last partition, as illustrated below:

[Figure: partitions 1-9 hold only key a; partition 10 holds keys b and c]
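For concreteness, here is a minimal sketch of such a skewed pair RDD in Scala (the record counts, key names, and local master URL are illustrative, not from the question):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("skewed-pairs")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Nine partitions' worth of records all carrying key "a",
    // plus one partition holding the only records for "b" and "c".
    val heavy = sc.parallelize(1 to 9000, 9).map(i => ("a", i))
    val light = sc.parallelize(Seq(("b", 1), ("c", 2)), 1)
    val pairs = heavy.union(light) // 10 partitions, heavily skewed towards "a"

    // Inspect how many records each partition holds.
    pairs
      .mapPartitionsWithIndex { (idx, it) => Iterator((idx, it.size)) }
      .collect()
      .foreach(println)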

Now if I do a groupByKey on this RDD, my understanding is that all data for the same key will eventually end up in a single partition; in other words, data for one key will never be spread across multiple partitions. Please correct me if I am wrong.

If that is the case, the partition holding key a could grow to a size that does not fit in a worker's RAM. What will Spark do then? My assumption is that it will spill the data to the worker's disk. Is that correct? Or how does Spark handle such situations?

Answer 1:

Does spark keep all elements (...) for a particular key in a single partition after groupByKey

Yes, it does. That is the whole point of the shuffle.
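A quick way to convince yourself, sketched against the pairs RDD from the question's setup above: after groupByKey, list the keys held by each partition and observe that no key appears under more than one partition index.

    val grouped = pairs.groupByKey(10)

    // Each key should show up under exactly one partition index.
    grouped
      .mapPartitionsWithIndex { (idx, it) =>
        it.map { case (k, vs) => (idx, k, vs.size) }
      }
      .collect()
      .foreach(println)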

the partition holding key a could grow to a size that does not fit in a worker's RAM. What will Spark do then

The size of a particular partition is not the biggest issue here. Partitions are represented by lazy Iterators and can easily hold data that exceeds the amount of available memory. The main problem is the non-lazy local data structure generated in the process of grouping.

All values for a particular key are stored in memory as a CompactBuffer, so a single large group can result in an OOM error. Even if each record individually fits in memory, you may still encounter serious GC issues.
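You can observe the materialization directly (a sketch; CompactBuffer is a private Spark class, so inspecting its name at runtime is for illustration only and may change between versions):

    // Each group's Iterable is a fully materialized in-memory buffer,
    // not a lazy stream: all values for one key must fit on the heap.
    pairs.groupByKey()
      .map { case (k, vs) => (k, vs.getClass.getName, vs.size) }
      .collect()
      .foreach(println)
    // Typically prints org.apache.spark.util.collection.CompactBuffer
    // as the concrete Iterable implementation.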

In general:

  • It is safe, although not optimal performance-wise, to repartition data so that the amount of data assigned to a single partition exceeds the amount of available memory.
  • It is not safe to use PairRDDFunctions.groupByKey in the same situation (see the sketch after this list).
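To illustrate the contrast, a sketch reusing the pairs RDD from above (the partition count is arbitrary):

    // Safe: each partition is consumed through a lazy Iterator, so a
    // partition larger than RAM can still be streamed record by record.
    pairs
      .repartition(2)
      .mapPartitions { it => Iterator.single(it.size) } // streams, never buffers
      .collect()

    // Not safe: every value for key "a" is first collected into a single
    // in-memory CompactBuffer before your code ever sees it.
    // pairs.groupByKey() // may OOM once one key's values exceed the heap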

Note: you shouldn't extrapolate this to other implementations of groupByKey, though. In particular, both the Spark Dataset API and PySpark's RDD.groupByKey use more sophisticated mechanisms.
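For example, with the Dataset API (a sketch assuming the spark session and pairs RDD above; my understanding is that Dataset grouping streams each group through a lazy Iterator backed by a sort-based shuffle that can spill to disk, rather than materializing a per-key buffer):

    import spark.implicits._

    val ds = pairs.toDS() // Dataset[(String, Int)]

    ds.groupByKey { case (k, _) => k }
      .mapGroups { (k, rows) => (k, rows.size) } // rows is an Iterator, not a buffer
      .show()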