When using the reduce() operation on a parallel stream, the OCP exam book states that there are certain principles the reduce() arguments must adhere to. Those principles are the following:
- The identity must be defined such that for all elements u in the stream, combiner.apply(identity, u) is equal to u.
- The accumulator operator op must be associative and stateless such that (a op b) op c is equal to a op (b op c).
- The combiner operator must also be associative and stateless and compatible with the identity, such that for all u and t, combiner.apply(u, accumulator.apply(identity, t)) is equal to accumulator.apply(u, t).
The book gives two examples to illustrate these principles; please see the code below.

Example for associativity:

```java
System.out.println(
    Arrays.asList(1, 2, 3, 4, 5, 6)
          .parallelStream()
          .reduce(0, (a, b) -> (a - b)));
```
What the book says about this: it may output -21, 3, or some other value, as the accumulator function violates the associativity property.
Example for the identity requirement:

```java
System.out.println(
    Arrays.asList("w", "o", "l", "f")
          .parallelStream()
          .reduce("X", String::concat));
```
What the book says about this: you can see other problems if we use an identity parameter that is not truly an identity value. It can output XwXoXlXf. As part of the parallel process, the identity is applied to multiple elements in the stream, resulting in very unexpected data.
I don't understand those examples. In the accumulator example, the accumulator starts with 0 - 1, which is -1, then -1 - 2, which is -3, then -6, and so on, all the way to -21. I understand that, because the generated ArrayList isn't synchronized, the results may be unpredictable due to race conditions and the like, but why isn't the accumulator associative? Wouldn't (a + b) cause unpredictable results too? I really don't see what's wrong with the accumulator used in the example or why it's not associative, but then again I still don't exactly understand what the associativity requirement means.
I don't understand the identity example either. I understand that the result could indeed be XwXoXlXf if four separate threads were to start accumulating with the identity at the same time, but what does that have to do with the identity parameter itself? What exactly would be a proper identity to use then?
I was wondering if anyone could enlighten me a bit more on these principles.
Thank you
Let me give two examples. First, one where the identity is broken. Basically, you have broken this rule:

The identity value must be an identity for the accumulator function. This means that for all u, accumulator(identity, u) is equal to u.

Or, to make it simpler, let's see whether that rule holds for some sample data from our Stream. In the book's example, accumulator("X", "w") yields "Xw", which is not equal to "w", so "X" is not an identity.
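Here is that check as a small runnable sketch, using the accumulator and identity from the book's example (String::concat and "X"):

```java
import java.util.function.BinaryOperator;

public class IdentityCheck {
    public static void main(String[] args) {
        BinaryOperator<String> accumulator = String::concat;
        String identity = "X";

        // The rule requires accumulator(identity, u) == u for every element u.
        String u = "w";
        System.out.println(accumulator.apply(identity, u));            // Xw
        System.out.println(accumulator.apply(identity, u).equals(u));  // false -> "X" is not an identity
        System.out.println(accumulator.apply("", u).equals(u));        // true  -> "" is an identity for concat
    }
}
```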
A second example breaks this rule:

Additionally, the combiner function must be compatible with the accumulator function; in code, combiner.apply(u, accumulator.apply(identity, t)) must be equal to accumulator.apply(u, t).

As for the book's subtraction example: it's not associative, since the order of the subtraction operations determines the final result.
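To make the compatibility rule concrete, here is a sketch of my own (not from the book) using the three-argument form of reduce, where the accumulator and combiner do satisfy the rule:

```java
import java.util.Arrays;
import java.util.List;

public class CombinerCompatibility {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("w", "o", "l", "f");

        // accumulator folds a String into a running length; combiner merges lengths.
        int lengths = words.parallelStream()
                .reduce(0,
                        (sum, s) -> sum + s.length(), // accumulator
                        Integer::sum);                // combiner
        System.out.println(lengths); // 4

        // Check the rule directly for sample values u = 10, t = "lf":
        int u = 10;
        String t = "lf";
        int viaCombiner    = Integer.sum(u, 0 + t.length()); // combiner(u, accumulator(identity, t))
        int viaAccumulator = u + t.length();                 // accumulator(u, t)
        System.out.println(viaCombiner == viaAccumulator);   // true
    }
}
```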
If you run a serial Stream, you'll get the expected result of ((((((0 - 1) - 2) - 3) - 4) - 5) - 6) = -21. On the other hand, for parallel Streams the work is split across multiple threads. For example, if reduce is executed in parallel on 6 threads, and the intermediate results are then combined, you can get a different result. Or, to make a long example short: (1 - 2) - 3 is -4, while 1 - (2 - 3) is 2. Therefore subtraction is not associative.
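You can verify both points in a few lines (the parallel result is intentionally left unasserted, since it is unspecified):

```java
import java.util.Arrays;
import java.util.List;

public class SubtractionNotAssociative {
    public static void main(String[] args) {
        // Associativity check: (a op b) op c must equal a op (b op c).
        int a = 1, b = 2, c = 3;
        System.out.println((a - b) - c); // -4
        System.out.println(a - (b - c)); // 2 -> subtraction is not associative

        // A sequential reduce is deterministic even with a broken accumulator:
        List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5, 6);
        System.out.println(nums.stream().reduce(0, (x, y) -> x - y)); // -21

        // The parallel result is unspecified; it may be -21, 3, or something else.
        System.out.println(nums.parallelStream().reduce(0, (x, y) -> x - y));
    }
}
```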
On the other hand, a + b doesn't cause the same problem, since addition is an associative operator (i.e. (a + b) + c == a + (b + c)).

The problem with the identity example is that when reduce is executed in parallel on multiple threads, "X" is prepended to each intermediate result.
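To see where "XwXoXlXf" can come from, here is a sketch of my own that simulates one possible parallel split (not the actual Stream internals): every element lands in its own chunk, and every chunk starts accumulating from the identity "X".

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.BinaryOperator;

public class IdentitySimulation {
    public static void main(String[] args) {
        BinaryOperator<String> accumulator = String::concat;
        List<String> parts = Arrays.asList("w", "o", "l", "f");

        // Each chunk accumulates its single element starting from "X"...
        // ...then the per-chunk results are concatenated together.
        String result = parts.stream()
                .map(t -> accumulator.apply("X", t)) // "Xw", "Xo", "Xl", "Xf"
                .reduce("", accumulator);
        System.out.println(result); // XwXoXlXf
    }
}
```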
If you change the identity value to "", you'll get "wolf" instead of "XwXoXlXf".
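With that one change, the corrected version of the book's example becomes deterministic, because "" is a true identity for String::concat and concatenation is associative:

```java
import java.util.Arrays;

public class ValidIdentity {
    public static void main(String[] args) {
        // "" is a true identity for String::concat, and concatenation is
        // associative, so the parallel result is guaranteed to be correct.
        String result = Arrays.asList("w", "o", "l", "f")
                .parallelStream()
                .reduce("", String::concat);
        System.out.println(result); // wolf
    }
}
```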
While the question has already been answered and accepted, I think it can be answered in a simpler, more practical way.

If you don't have a valid identity and an associative accumulator/combiner, the result of the reduce operation will depend on:

- the Stream content
- the number of threads

Associativity
Let's try an example with a non-associative accumulator/combiner: reduce a list of 50 numbers with subtraction, sequentially and then in parallel while varying the number of threads. Running it (on Oracle JDK 10.0.1) displays a different reduced value for each thread count, which shows that the result depends on the number of threads involved in the reduce calculation.
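A sketch of that experiment could look like the following. Note that it relies on the commonly used, but not formally specified, behavior that a parallel stream started from inside a custom ForkJoinPool task runs in that pool; only the sequential baseline has a guaranteed value.

```java
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ThreadCountDependence {
    public static void main(String[] args) throws Exception {
        List<Integer> nums = IntStream.rangeClosed(1, 50)
                .boxed()
                .collect(Collectors.toList());

        // Sequential baseline: 0 - 1 - 2 - ... - 50 = -1275
        System.out.println("sequential: " + nums.stream().reduce(0, (a, b) -> a - b));

        // Run the same parallel reduce inside custom pools of 1..5 threads.
        for (int threads = 1; threads <= 5; threads++) {
            ForkJoinPool pool = new ForkJoinPool(threads);
            int result = pool.submit(
                    () -> nums.parallelStream().reduce(0, (a, b) -> a - b)).get();
            System.out.println(threads + " thread(s): " + result);
            pool.shutdown();
        }
    }
}
```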
Notes:

- The same Stream content with the same number of threads always leads to the same reduced value when run several times. I suppose this is because the parallel stream uses a deterministic Spliterator.
- The parallel runs use a ForkJoinPool of 1, 2, 3, 4, or 5 threads.

Identity
For identity, as Eran wrote with the "XwXoXlXf" example, with 4 threads each thread will start by using the identity as a kind of String prefix. But pay attention: while the OCP book suggests that "" and 0 are valid identity values, it depends on the accumulator/combiner functions. For example:

- 0 is a valid identity for the accumulator (a, b) -> a + b (because a + 0 = a)
- 1 is a valid identity for the accumulator (a, b) -> a * b (because a * 1 = a, but 0 is not valid because a * 0 = 0!)
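You can see the difference directly: with a valid identity the parallel reduce is correct, while an invalid one collapses the whole multiplication to 0.

```java
import java.util.Arrays;
import java.util.List;

public class IdentityDependsOnOperator {
    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4);

        // 1 is a valid identity for multiplication: a * 1 = a
        System.out.println(nums.parallelStream().reduce(1, (a, b) -> a * b)); // 24

        // 0 is NOT an identity for multiplication: a * 0 = 0,
        // so every chunk (and the combined result) collapses to 0.
        System.out.println(nums.parallelStream().reduce(0, (a, b) -> a * b)); // 0

        // 0 IS a valid identity for addition: a + 0 = a
        System.out.println(nums.parallelStream().reduce(0, Integer::sum)); // 10
    }
}
```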