How to split a Spark DataFrame into an equal number of records

Posted 2019-09-16 05:32

Question:

I am using df.randomSplit(), but it is not splitting the DataFrame into equal-sized parts. Is there any other way I can achieve this?
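
(A quick way to see the behaviour: randomSplit() treats its weights as per-row sampling probabilities, so the resulting counts only match the weights in expectation, not exactly. A minimal sketch, assuming a SparkSession named spark:)

val df = spark.range(100).toDF("id")
val Array(a, b) = df.randomSplit(Array(0.5, 0.5), seed = 42)
// Counts are only approximately equal, e.g. 47 / 53 rather than 50 / 50
println(s"${a.count()} / ${b.count()}")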

Answer 1:

In my case I needed balanced (equal-sized) partitions in order to perform a specific cross-validation experiment.

For that you usually:

  1. Randomize the dataset
  2. Apply a modulus operation to assign each element to a fold (partition)

After this step you will have to extract each partition using filter; as far as I know, there is still no transformation that separates a single RDD into many.

Here is some code in Scala; it only uses standard Spark operations, so it should be easy to adapt to Python. Below, `data` stands in for your input RDD, and `seed` / `m_classIndex` are placeholders you will need to set for your own data:

val npartitions = 3
val seed = 42L                 // any fixed seed, for reproducibility
val m_classIndex = 0           // index of the class label within each instance

// `data` stands in for your input RDD; each instance is assumed to be
// indexable (e.g. an Array), so its class label can be read for stratification.
val foldedRDD = data
   // Pair each instance with a deterministic pseudo-random number
   .zipWithIndex
   .map( t => (t._1, t._2, new scala.util.Random(t._2 * seed).nextInt()) )
   // Random ordering within each class (keeps the folds stratified)
   .sortBy( t => (t._1(m_classIndex), t._3) )
   // Assign each instance to a fold via the modulus of its position
   .zipWithIndex
   .map( t => (t._1._1, t._2 % npartitions) )

val balancedRDDList =
    for (f <- 0 until npartitions)
    yield foldedRDD.filter( _._2 == f )
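
Since the question is about DataFrames: here is a rough sketch of the same idea with the DataFrame API (my adaptation, with `df` standing in for your input DataFrame). Note that a window without partitionBy pulls all rows through a single task, so this is only practical for moderately sized data:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, lit, pmod, rand, row_number}

val npartitions = 3
// Number the rows in a random order, then assign fold = (row_number - 1) mod npartitions
val w = Window.orderBy(rand(42))
val folded = df.withColumn("fold", pmod(row_number().over(w) - lit(1), lit(npartitions)))

val balancedDFList =
    for (f <- 0 until npartitions)
    yield folded.filter(col("fold") === f)

Because the fold is assigned from consecutive row numbers, the resulting splits differ in size by at most one row.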