What I want
I want to work on optimizing a fork/join algorithm. By optimization I mean simply calculating the optimal number of threads, or, if you prefer, calculating SEQUENTIAL_THRESHOLD (see the code below).
// PSEUDOCODE
Result solve(Problem problem) {
    if (problem.size < SEQUENTIAL_THRESHOLD)
        return solveSequentially(problem);
    else {
        Result left, right;
        INVOKE-IN-PARALLEL {
            left = solve(extractLeftHalf(problem));
            right = solve(extractRightHalf(problem));
        }
        return combine(left, right);
    }
}
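In real Java this pattern is usually written with ForkJoinPool and RecursiveTask. A minimal sketch (the array-sum task and the concrete threshold value are mine, for illustration only):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Minimal fork/join sum over a long[]; SEQUENTIAL_THRESHOLD is the knob in question.
class SumTask extends RecursiveTask<Long> {
    static final int SEQUENTIAL_THRESHOLD = 2667; // placeholder, taken from the example below

    final long[] a;
    final int lo, hi;

    SumTask(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo < SEQUENTIAL_THRESHOLD) {
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += a[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(a, lo, mid);
        SumTask right = new SumTask(a, mid, hi);
        left.fork();                // run the left half asynchronously
        long r = right.compute();   // compute the right half in this thread
        return r + left.join();     // wait for the left half and combine
    }
}
```

Invoked as `long total = new ForkJoinPool().invoke(new SumTask(a, 0, a.length));`.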
How I imagine it
For example, say I want to compute the product of a big array. Then I just evaluate all the components and get the optimal thread count:
SEQUENTIAL_THRESHOLD = PC * IS / MC
(just an example)
- PC - number of processor cores;
- IS - a constant indicating the optimal array size for one processor core running the simplest operation on the data (for example, reading);
- MC - the cost of the multiply operation.
Suppose MC = 15, PC = 4 and IS = 10000; then SEQUENTIAL_THRESHOLD = 4 * 10000 / 15 ≈ 2667. So if a subtask's array is bigger than 2667 elements, I'll fork it.
Broad questions
- Is it possible to build a SEQUENTIAL_THRESHOLD formula this way?
- Is it possible to accomplish the same for more complex computations, not just for operations on arrays/collections and sorting?
Narrow question:
Does any existing research investigate calculating SEQUENTIAL_THRESHOLD for arrays/collections/sorting? How does it do so?
Updated 07 March 2014:
- If there is no way to write a single formula for threshold calculation, can I write a utility that runs predefined tests on the PC and then derives the optimal threshold? Is that also impossible, or not?
- What can the Java 8 Streams API do? Can it help me? Does the Java 8 Streams API eliminate the need for Fork/Join?
This is a very interesting problem to investigate. I have written this simple code to test the optimal value of the sequential threshold. I was unable to reach any concrete conclusions though, most probably because I am running it on an old laptop with only 2 processors. The only consistent observation after many runs was that the time taken drops rapidly up to a sequential threshold of about 100. Try running this code and let me know what you find. Also, at the bottom I have attached a Python script for plotting the results so that we can see the trend visually.
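The answer's original snippet is not reproduced above; a minimal benchmark in the same spirit (sweep candidate thresholds over an array sum and print one `threshold<TAB>millis` line per run; all names and sizes are my own assumptions) could look like:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sweep candidate thresholds and print wall-clock time for each, so the
// output can be plotted as (threshold, millis) pairs.
public class ThresholdSweep {
    static class Sum extends RecursiveTask<Long> {
        final long[] a;
        final int lo, hi, threshold;
        Sum(long[] a, int lo, int hi, int threshold) {
            this.a = a; this.lo = lo; this.hi = hi; this.threshold = threshold;
        }
        @Override protected Long compute() {
            if (hi - lo <= threshold) {
                long s = 0;
                for (int i = lo; i < hi; i++) s += a[i];
                return s;
            }
            int mid = (lo + hi) >>> 1;
            Sum left = new Sum(a, lo, mid, threshold);
            left.fork();
            long r = new Sum(a, mid, hi, threshold).compute();
            return r + left.join();
        }
    }

    public static void main(String[] args) {
        long[] a = new long[1 << 22];              // ~4M elements
        for (int i = 0; i < a.length; i++) a[i] = i;
        ForkJoinPool pool = new ForkJoinPool();
        for (int t = 100; t <= (1 << 20); t *= 2) {
            long start = System.nanoTime();
            long sum = pool.invoke(new Sum(a, 0, a.length, t));
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println(t + "\t" + ms + "\t" + sum);
        }
    }
}
```

Each threshold should really be timed several times (and warmed up first) before trusting the numbers, since the JIT skews early iterations.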
and here is the Python script for plotting results:
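That script was also not reproduced above; a minimal matplotlib sketch that plots whitespace-separated `(threshold, millis)` lines (the file name and column layout are assumptions) would be:

```python
# Plot (threshold, millis) pairs produced by the benchmark, one pair per line.

def load_results(path):
    """Parse whitespace-separated lines: threshold in column 0, millis in column 1."""
    thresholds, times = [], []
    with open(path) as f:
        for line in f:
            cols = line.split()
            if len(cols) >= 2:
                thresholds.append(int(cols[0]))
                times.append(float(cols[1]))
    return thresholds, times

def plot_results(path="results.tsv"):
    # Imported here so the parsing helper works without matplotlib installed.
    import matplotlib.pyplot as plt
    thresholds, times = load_results(path)
    plt.plot(thresholds, times, marker="o")
    plt.xscale("log")
    plt.xlabel("sequential threshold")
    plt.ylabel("time (ms)")
    plt.show()

if __name__ == "__main__":
    plot_results()
```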
There is absolutely, positively no way to calculate a proper threshold unless you are intimate with the execution environment. I maintain a fork/join project on sourceforge.net, and this is the code I use in most built-in functions:
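The referenced code is not shown above. As a hedged illustration only (not the author's actual code), one common style of environment-aware threshold divides the input by a multiple of the pool's parallelism, with a fixed floor; for comparison, OpenJDK's `Arrays.parallelSort` uses a similar fixed granularity floor internally. The `LOAD_FACTOR` and `MIN_GRANULARITY` values below are assumptions:

```java
import java.util.concurrent.ForkJoinPool;

// Illustrative heuristic (not the author's code): split until pieces are
// about length / (parallelism * LOAD_FACTOR), but never below a floor.
final class Thresholds {
    static final int LOAD_FACTOR = 8;           // assumed over-partitioning factor
    static final int MIN_GRANULARITY = 1 << 10; // assumed floor of 1024 elements

    static int threshold(int length) {
        int parallelism = ForkJoinPool.getCommonPoolParallelism();
        return Math.max(MIN_GRANULARITY, length / (parallelism * LOAD_FACTOR));
    }
}
```

Over-partitioning (more pieces than cores) is what lets the work-stealing scheduler balance uneven loads.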
Edit on 9 March:
How can you possibly have a general utility that knows not only the processor speed, the memory available, the number of processors, etc. (the physical environment), but also the intention of the software? The answer is: you cannot. Which is why you need to develop a routine for each environment. The above method is what I use for basic arrays (vectors). I use another for most matrix processing:
As for Java 8 streams: they use the F/J framework under the hood, and you cannot specify a threshold.
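For illustration, a parallel stream reduction: the split granularity is chosen by the framework (running on the common ForkJoinPool), with no public knob for the caller:

```java
import java.util.stream.LongStream;

final class StreamDemo {
    // Java 8 parallel stream: executes on the common ForkJoinPool under the
    // hood; the sequential threshold is decided by the spliterator/framework.
    static long parallelSum(long n) {
        return LongStream.rangeClosed(1, n).parallel().sum();
    }
}
```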
You cannot boil this down to a simple formula for several reasons:
Each PC will have vastly different parameters depending not only on the core, but also on other factors like RAM timing or background tasks.
Java itself optimizes loops on the fly during execution. So a momentarily perfect setting could be sub-optimal a few seconds later. Or worse: the adjustment could prevent perfect optimization altogether.
The only way forward that I can see is to dynamically adjust the values with some form of AI or genetic algorithm. However, that implies the program frequently checking non-optimal settings just to determine whether or not the current setting is still the best. So it is questionable whether the speed gained is actually higher than the speed lost trying other settings. In the end it is probably only a solution during an initial learning phase, while further executions then use those trained values as fixed numbers.
As this not only costs time but also greatly increases code complexity, I don't think this is an option for most programs. Often it is more beneficial not to use Fork/Join in the first place, as there are many other parallelization options that might suit the problem better.
An idea for a "genetic" algorithm would be to measure the loop efficiency on each run, maintaining a background hash map
loop-parameters -> execution time
that is constantly updated, and to select the fastest setting for most of the runs.
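A minimal sketch of that idea (all names are mine): a map from candidate threshold to the best observed time, reusing the fastest entry most of the time while occasionally re-measuring a random candidate:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the "background hash-map" idea: remember how long each threshold
// took, usually reuse the best one, occasionally explore another candidate.
final class AdaptiveThreshold {
    private final Map<Integer, Long> timings = new ConcurrentHashMap<>();
    private final int[] candidates;

    AdaptiveThreshold(int... candidates) { this.candidates = candidates; }

    int pick() {
        // Explore while coverage is incomplete, and ~10% of the time after
        // that, to keep the timings fresh as the environment changes.
        if (timings.size() < candidates.length
                || ThreadLocalRandom.current().nextInt(10) == 0) {
            return candidates[ThreadLocalRandom.current().nextInt(candidates.length)];
        }
        // Otherwise exploit the fastest threshold observed so far.
        return timings.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .get().getKey();
    }

    void record(int threshold, long nanos) {
        // Keep the best (smallest) time seen for this threshold.
        timings.merge(threshold, nanos, Math::min);
    }

    Long timeFor(int threshold) { return timings.get(threshold); }
}
```

The caller times each fork/join invocation, calls `record`, and asks `pick()` for the next threshold; after a learning phase the best value could simply be frozen, as the answer suggests.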