How does Spark's RDD.randomSplit actually split the RDD?

Published 2019-01-19 12:15

Question:

So assume I've got an RDD with 3000 rows. The first 2000 rows are of class 1 and the last 1000 rows are of class 2. The RDD is partitioned across 100 partitions.

When calling RDD.randomSplit([0.8, 0.2]):

Does the function also shuffle the RDD? Or does the splitting simply sample a contiguous 20% of the RDD? Or does it select 20% of the partitions at random?

Ideally, does each resulting split have the same class distribution as the original RDD (i.e. 2:1)?

Thanks

Answer 1:

For each range defined by the weights array there is a separate mapPartitionsWithIndex transformation, which preserves partitioning.

Each partition is sampled using a set of BernoulliCellSamplers. For each split, the sampler iterates over the elements of a given partition and selects an item if the next random Double falls in the range defined by the normalized weights. All samplers for a given partition use the same RNG seed. This means randomSplit:

  • doesn't shuffle the RDD
  • doesn't take contiguous blocks, other than by chance
  • takes a random sample from each partition
  • takes non-overlapping samples
  • requires n-splits passes over the data
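The mechanism above can be sketched in plain Python. This is an illustrative simplification, not Spark's actual Scala implementation: the function name and structure are mine, but the core idea matches — re-seeding the RNG identically for each pass means every element draws the same random value in every pass, so it lands in exactly one weight range and the splits are disjoint without any shuffling.

```python
import random

def bernoulli_cell_split(partition, weights, seed):
    """Simplified sketch of per-partition BernoulliCellSampler logic."""
    total = float(sum(weights))
    # Normalized cumulative [lo, hi) bounds, one range per split.
    bounds, acc = [], 0.0
    for w in weights:
        bounds.append((acc / total, (acc + w) / total))
        acc += w
    splits = [[] for _ in weights]
    # One pass per split; each pass re-seeds the RNG identically,
    # so a given element sees the same random value in every pass
    # and is selected by exactly one split.
    for i, (lo, hi) in enumerate(bounds):
        rng = random.Random(seed)
        for item in partition:
            x = rng.random()
            if lo <= x < hi:
                splits[i].append(item)
    return splits

# 2000 "class 1" rows followed by 1000 "class 2" rows, as in the question.
partition = [1] * 2000 + [2] * 1000
a, b = bernoulli_cell_split(partition, [0.8, 0.2], seed=42)

# Splits are disjoint and together cover the partition, in original order.
assert len(a) + len(b) == len(partition)
# Each element is sampled independently, so the ~2:1 class ratio is
# preserved in expectation (here class 1 should be close to 2/3).
frac_class1 = a.count(1) / len(a)
assert abs(frac_class1 - 2 / 3) < 0.05
```

Because sampling happens independently inside every partition, the class distribution of each split matches the original RDD in expectation, even though no shuffle occurs and the original element order is preserved.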