What is the difference between reduce and fold with respect to their technical implementation?
I understand that they differ in their signatures, as fold accepts an additional parameter (i.e. an initial value) which gets added to each partition's output.
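For example (just to illustrate the two signatures; assumes an active SparkContext sc):

val rdd = sc.parallelize(1 to 100)
rdd.reduce(_ + _)     // no initial value; returns 5050
rdd.fold(0)(_ + _)    // 0 is the initial (zero) value; also returns 5050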
- Can someone describe the use cases for these two actions?
- Which would perform better in which scenario, assuming 0 is used for fold?
Thanks in advance.
There is no practical difference when it comes to performance whatsoever:
- The RDD.fold action uses fold on the partition Iterators, which is implemented using foldLeft.
- RDD.reduce uses reduceLeft on the partition Iterators.
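Conceptually, both actions boil down to "fold/reduce each partition locally, then merge the per-partition results on the driver". A simplified sketch of that shape (not the actual Spark source; foldSketch and reduceSketch are made-up helper names):

import org.apache.spark.rdd.RDD

// Fold every partition with the zero value, then fold the collected
// partition results on the driver, using the zero value once more.
def foldSketch(rdd: RDD[Int], zeroValue: Int)(op: (Int, Int) => Int): Int =
  rdd.mapPartitions(iter => Iterator(iter.fold(zeroValue)(op)))
    .collect()
    .fold(zeroValue)(op)

// Reduce every non-empty partition, then reduce the collected partition
// results on the driver; an empty RDD leaves the final reduceLeft with nothing.
def reduceSketch(rdd: RDD[Int])(op: (Int, Int) => Int): Int =
  rdd.mapPartitions(iter => if (iter.isEmpty) Iterator.empty else Iterator(iter.reduceLeft(op)))
    .collect()
    .reduceLeft(op)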
Both methods keep a mutable accumulator and process each partition sequentially using a simple loop, with foldLeft implemented like this:
foreach (x => result = op(result, x))
and reduceLeft like this:
for (x <- self) {
  if (first) {
    ...
  }
  else acc = op(acc, x)
}
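For reference, self-contained versions of those two loops over a plain Scala List look roughly like this (simplified sketches, not the actual standard-library code):

// foldLeft: the accumulator is seeded with the initial value z.
def localFoldLeft[A, B](xs: List[A], z: B)(op: (B, A) => B): B = {
  var result = z
  xs.foreach(x => result = op(result, x))
  result
}

// reduceLeft: the accumulator is seeded with the first element,
// so an empty collection has nothing to start from and must fail.
def localReduceLeft[A](xs: List[A])(op: (A, A) => A): A = {
  if (xs.isEmpty) throw new UnsupportedOperationException("empty.reduceLeft")
  var first = true
  var acc: A = xs.head
  for (x <- xs) {
    if (first) first = false   // first element is already in acc
    else acc = op(acc, x)
  }
  acc
}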
The practical difference between these methods in Spark relates only to their behavior on empty collections and to the ability to use a mutable buffer (arguably this is related to performance). You'll find some discussion in Why is the fold action necessary in Spark?
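A quick illustration of the empty-collection difference (assumes an active SparkContext sc):

val empty = sc.parallelize(Seq.empty[Int])
empty.fold(0)(_ + _)     // returns the zero value: 0
// empty.reduce(_ + _)   // throws UnsupportedOperationException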
Moreover, there is no difference in the overall processing model:
- Each partition is processed sequentially using a single thread.
- Partitions are processed in parallel using multiple executors / executor threads.
- Final merge is performed sequentially using a single thread on the driver.
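A consequence of this model worth keeping in mind (a small illustration; assumes an active SparkContext sc): the zero value passed to fold is used once per partition and once more in the driver-side merge, which is why it should be a neutral element for op:

val rdd = sc.parallelize(1 to 4, numSlices = 2)
rdd.fold(0)(_ + _)     // 10
rdd.fold(10)(_ + _)    // 40: 10 (the sum) + 10 * (2 partitions + 1 driver merge)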