I'm having trouble understanding Round Robin Partitioning in Spark. Consider the following example. I split a Seq of size 3 into 3 partitions:
val df = Seq(0,1,2).toDF().repartition(3)
df.explain
== Physical Plan ==
Exchange RoundRobinPartitioning(3)
+- LocalTableScan [value#42]
Now if I inspect the partitions, I get:
df
.rdd
.mapPartitionsWithIndex{case (i,rows) => Iterator((i,rows.size))}
.toDF("partition_index","number_of_records")
.show
+---------------+-----------------+
|partition_index|number_of_records|
+---------------+-----------------+
| 0| 0|
| 1| 2|
| 2| 1|
+---------------+-----------------+
If I do the same with a Seq of size 8 and split it into 8 partitions, I get even worse skew:
(0 to 7).toDF().repartition(8)
.rdd
.mapPartitionsWithIndex{case (i,rows) => Iterator((i,rows.size))}
.toDF("partition_index","number_of_records")
.show
+---------------+-----------------+
|partition_index|number_of_records|
+---------------+-----------------+
| 0| 0|
| 1| 0|
| 2| 0|
| 3| 0|
| 4| 0|
| 5| 0|
| 6| 4|
| 7| 4|
+---------------+-----------------+
Can somebody explain this behavior? As far as I understand round robin partitioning, all partitions should be roughly the same size.
I can't explain exactly why, but somehow it is linked to the local master.
If you explicitly set:
--master local      => 1 row per partition (no parallelism)
--master "local[2]" => 2 rows per partition (4 partitions empty)
--master "local[4]" => 4 rows per partition (6 partitions empty)
--master "local[8]" => 8 rows per partition (7 partitions empty)
(Checked for Spark versions 2.1-2.4)
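A minimal sketch to see this in a spark-shell (assuming the implicits for toDF are in scope, as in the snippets above): the number of source partitions the Seq starts with follows the local master's core count, and that is what the exchange then redistributes.

// Run in a spark-shell started with the --master setting you want to test.
val source = (0 to 7).toDF()
println(spark.sparkContext.defaultParallelism) // number of local cores
println(source.rdd.getNumPartitions)           // source partitions before repartition(8)

source.repartition(8)
  .rdd
  .mapPartitionsWithIndex { case (i, rows) => Iterator((i, rows.size)) }
  .toDF("partition_index", "number_of_records")
  .show()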
As far as I can see from the ShuffleExchangeExec code, Spark tries to partition the rows directly from the original partitions (via mapPartitions) without bringing anything to the driver. The logic is to start with a randomly picked target partition and then assign target partitions to the rows in round-robin fashion. Note that the "start" partition is picked separately for each source partition, so there can be collisions.
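Here is a simplified, standalone simulation of that logic (a sketch, not Spark's actual implementation; the simulate helper is hypothetical): each source partition picks a random start target and then deals its rows out round-robin from there, so overlapping starts are what produce the skew above.

import scala.util.Random

// Hypothetical helper: rowsPerSource(i) = number of rows in source partition i.
def simulate(rowsPerSource: Seq[Int], numTargets: Int, seed: Long): Seq[(Int, Int)] = {
  val rng = new Random(seed)
  val counts = Array.fill(numTargets)(0)
  for (numRows <- rowsPerSource) {
    var target = rng.nextInt(numTargets)   // random "start" target per source partition
    for (_ <- 0 until numRows) {
      counts(target) += 1
      target = (target + 1) % numTargets   // round-robin to the next target
    }
  }
  counts.zipWithIndex.map { case (c, i) => (i, c) }.toSeq
}

// 8 source partitions with 1 row each (roughly the local[8] case) into 8 targets:
// whenever several sources pick the same start, that target collects all their rows.
simulate(Seq.fill(8)(1), numTargets = 8, seed = 42).foreach(println)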
The final distribution depends on many factors: the number of source and target partitions and the number of rows in your DataFrame.
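Using the hypothetical simulate helper above, you can see how different source layouts change the outcome:

// One source partition of 8 rows (the --master local case): always one row per target.
simulate(Seq(8), numTargets = 8, seed = 1).foreach(println)

// Two source partitions of 4 rows each (roughly local[2]): whether targets overlap
// depends entirely on where the two random starts land.
simulate(Seq(4, 4), numTargets = 8, seed = 1).foreach(println)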