I am new to Spark and Scala, and I am confused about how the reduceByKey function works in Spark. Suppose we have the following code:
val lines = sc.textFile("data.txt")
val pairs = lines.map(s => (s, 1))
val counts = pairs.reduceByKey((a, b) => a + b)
The map function is clear: s is the key, and it points to a line from data.txt; 1 is the value.
However, I don't understand how reduceByKey works internally. Does "a" point to the key? Alternatively, does "a" point to "s"? Then what does a + b represent? How are they filled?
One requirement for the reduceByKey function is that it must be associative. To build some intuition on how reduceByKey works, let's first see how an associative function helps us in a parallel computation: we can break the original collection into pieces and, by applying the associative function, accumulate a total. The sequential case is trivial, we are used to it: 1+2+3+4+5+6+7+8+9+10.
Associativity lets us use that same function in sequence and in parallel.
reduceByKey uses that property to compute a result out of an RDD, which is a distributed collection consisting of partitions. Consider the following example: in Spark, data is distributed into partitions, say 4 of them. First, we apply the function locally to each partition, sequentially within the partition, but we run all 4 partitions in parallel. Then, the results of the local computations are aggregated by applying the same function again, and we finally arrive at a result.
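As a rough sketch of that idea in plain Scala (no Spark here; the split into 4 pieces and all of the names are just for illustration):

// The same associative function, used both within each "partition" and
// to combine the per-partition results.
val add = (a: Int, b: Int) => a + b
val data = (1 to 10).toList

// Sequential case: 1+2+3+4+5+6+7+8+9+10
val sequentialTotal = data.reduce(add)

// "Parallel" case: split into 4 pieces, reduce each piece locally,
// then reduce the partial results with the same function.
val pieces = List(List(1, 2, 3), List(4, 5), List(6, 7, 8), List(9, 10))
val partialTotals = pieces.map(_.reduce(add))
val parallelTotal = partialTotals.reduce(add)

// Associativity guarantees sequentialTotal == parallelTotal (both are 55).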
reduceByKey is a specialization of aggregateByKey. aggregateByKey takes two functions: one that is applied to each partition (sequentially) and one that is applied among the results of each partition (in parallel). reduceByKey uses the same associative function in both cases: to do a sequential computation on each partition, and then to combine those partial results into a final result, as described above.
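For the pairs from the question, the two calls below should produce the same counts. This is only a sketch to show the shape of the two APIs; the zero value 0 and reusing _ + _ for both of aggregateByKey's functions are the assumptions being illustrated:

// pairs is the RDD[(String, Int)] built in the question.
// reduceByKey: one associative function, used per partition and across partitions.
val countsViaReduce = pairs.reduceByKey(_ + _)

// aggregateByKey: a zero value plus two functions, one applied within each
// partition (seqOp) and one applied to combine partition results (combOp).
// Passing the same function twice mirrors what reduceByKey does.
val countsViaAggregate = pairs.aggregateByKey(0)(_ + _, _ + _)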
Let's break it down to discrete methods and types. That usually exposes the intricacies for new devs:
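Starting from the call in the question:

pairs.reduceByKey((a, b) => a + b)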
becomes
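something like this, with the inferred types written out explicitly (they are Int because the values in pairs are Int):

pairs.reduceByKey((a: Int, b: Int) => a + b)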
and renaming the variables makes it a little more explicit
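for example (accumulatedValue and currentValue are just illustrative names, nothing Spark requires):

pairs.reduceByKey((accumulatedValue: Int, currentValue: Int) => accumulatedValue + currentValue)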
So, we can now see that we are simply taking an accumulated value for the given key and summing it with the next value of that key. Now, let's break it down further so we can understand the key part, and visualize the method more like this:
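Here is one way to picture it, as a plain-Scala sketch over a local List rather than the actual Spark implementation (localPairs, accumulated, and current are illustrative names):

// What reduceByKey does conceptually: walk the (key, value) pairs, keep a
// running total per key, and end up with one entry per distinct key.
val localPairs = List(("spark", 1), ("scala", 1), ("spark", 1))

val totalsByKey: Map[String, Int] =
  localPairs.foldLeft(Map.empty[String, Int]) { (accumulated, current) =>
    val (key, value) = current
    // Look up what we have accumulated for this key so far (0 if unseen)
    // and add the current value to it.
    accumulated + (key -> (accumulated.getOrElse(key, 0) + value))
  }
// totalsByKey == Map("spark" -> 2, "scala" -> 1)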
So, you can see that reduceByKey takes care of the boilerplate of finding and tracking the key, so that you don't have to worry about managing that part.
Deeper, truer if you want
All that being said, this is a simplified version of what happens, as there are some optimizations done here. This operation is associative, so the Spark engine will perform these reductions locally first (often termed a map-side reduce) and then once again after the shuffle. This saves network traffic: instead of sending all the data and then performing the operation, Spark can reduce each partition's data to be as small as it can and then send that reduction over the wire.
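To make that concrete, here is a hedged sketch of the local (map-side) step for a single partition; the data is made up and groupMapReduce requires Scala 2.13+:

// One partition's pairs before the shuffle (made-up data):
val partitionPairs = List(("the", 1), ("cat", 1), ("the", 1), ("the", 1))

// The associative function is applied locally first, collapsing the
// partition to one record per key...
val preCombined = partitionPairs.groupMapReduce(_._1)(_._2)(_ + _)
// preCombined == Map("the" -> 3, "cat" -> 1)

// ...so only 2 records cross the network instead of 4, and the same function
// is applied again to merge pre-combined records from other partitions.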
In your example, a and b are both Int accumulators for the _2 of the tuples in pairs. reduceByKey will take two tuples with the same key s and use their _2 values as a and b, producing a new Tuple2[String, Int]. This operation is repeated until there is only one tuple for each key s.
Unlike a non-Spark (or, really, non-parallel) reduceByKey, where the first element is always the accumulator and the second a value, reduceByKey operates in a distributed fashion: each node will reduce its set of tuples into a collection of uniquely-keyed tuples and then reduce the tuples from multiple nodes until there is a final uniquely-keyed set of tuples. This means that as the results from nodes are reduced, a and b represent already-reduced accumulators.
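A small plain-Scala sketch of that pairwise merging (mergeSameKey and the literal values are illustrative, not Spark internals):

// Apply the user's (a, b) => a + b to the _2 values of two tuples that share
// a key, producing a new (String, Int) tuple.
def mergeSameKey(x: (String, Int), y: (String, Int)): (String, Int) =
  (x._1, x._2 + y._2)

val first  = mergeSameKey(("hello", 1), ("hello", 1)) // ("hello", 2)
// In a later merge, a is itself an already-reduced accumulator (2), not a raw 1.
val second = mergeSameKey(first, ("hello", 1))        // ("hello", 3)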