I am trying out multiprocessor programming with Python. Take a divide-and-conquer algorithm like Fibonacci, for example: the program's flow of execution branches out like a tree and executes in parallel. In other words, we have an example of nested parallelism.
Coming from Java, I have used a thread pool pattern to manage resources, since the program can branch out very quickly and create too many short-lived threads. A single static (shared) thread pool can be instantiated via ExecutorService.
I would expect the same for Pool, but it appears that a Pool object is not meant to be globally shared. For example, sharing the Pool using multiprocessing.Manager.Namespace() leads to the error:
pool objects cannot be passed between processes or pickled
I have a 2-part question:
- What am I missing here; why shouldn't a Pool be shared between processes?
- What is a pattern for implementing nested parallelism in Python? If possible, one that maintains the recursive structure rather than trading it for iteration.
from concurrent.futures import ThreadPoolExecutor

def fibonacci(n):
    if n < 2:
        return n
    a = pool.submit(fibonacci, n - 1)
    b = pool.submit(fibonacci, n - 2)
    return a.result() + b.result()

def main():
    global pool
    N = 10
    with ThreadPoolExecutor(2**N) as pool:
        print(fibonacci(N))

main()
Java:

public class FibTask implements Callable<Integer> {

    public static ExecutorService pool = Executors.newCachedThreadPool();

    int arg;

    public FibTask(int n) {
        this.arg = n;
    }

    @Override
    public Integer call() throws Exception {
        if (this.arg > 2) {
            Future<Integer> left = pool.submit(new FibTask(arg - 1));
            Future<Integer> right = pool.submit(new FibTask(arg - 2));
            return left.get() + right.get();
        } else {
            return 1;
        }
    }

    public static void main(String[] args) throws Exception {
        Integer n = 14;
        Callable<Integer> task = new FibTask(n);
        Future<Integer> result = FibTask.pool.submit(task);
        System.out.println(Integer.toString(result.get()));
        FibTask.pool.shutdown();
    }
}
I'm not sure if it matters here, but I am ignoring the difference between "process" and "thread"; to me they both mean "virtualized processor". My understanding is that the purpose of a Pool is the sharing of a "pool" of resources. Running tasks can make a request to the Pool, and as parallel tasks complete on other threads, those threads can be reclaimed and assigned to new tasks. It doesn't make sense to me to disallow sharing of the pool, so that each thread must instantiate its own new pool, since that would seem to defeat the purpose of a thread pool.
Not all objects/instances are picklable/serializable; in this case, the pool uses a threading.Lock, which is not picklable:
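A minimal demonstration (the exact exception text varies slightly across Python versions):

```python
import pickle
import threading

lock = threading.Lock()
try:
    pickle.dumps(lock)
    picklable = True
except TypeError as exc:
    # e.g. "cannot pickle '_thread.lock' object"
    picklable = False
    print(exc)
```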
or better:
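try pickling the Pool itself; Pool's __reduce__ raises the exact error you quoted (this sketch uses the thread-backed multiprocessing.pool.ThreadPool, which shares the same base class, just to avoid spawning processes for a demo):

```python
import pickle
from multiprocessing.pool import ThreadPool  # multiprocessing.Pool behaves the same

pool = ThreadPool(2)
try:
    pickle.dumps(pool)
    message = None
except NotImplementedError as exc:
    # "pool objects cannot be passed between processes or pickled"
    message = str(exc)
    print(message)
finally:
    pool.close()
    pool.join()
```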
If you think about it, it makes sense: a lock is a semaphore primitive managed by the operating system (since Python uses native threads). Being able to pickle and save that object's state inside the Python runtime would not accomplish anything meaningful, since its true state is kept by the OS.
Now, for the prestige: everything I mentioned above doesn't really apply to your example, since you are using threads (ThreadPoolExecutor) and not processes (ProcessPoolExecutor), so no data sharing across processes has to happen.
Your Java example just appears to be more efficient because the thread pool you are using (CachedThreadPool) creates new threads as needed, whereas the Python executor implementations are bounded and require an explicit max thread count (max_workers). There are some syntax differences between the languages that also seem to be throwing you off (static instances in Python are essentially anything not explicitly scoped), but essentially both examples create exactly the same number of threads in order to execute. For instance, here's an example using a fairly naive CachedThreadPoolExecutor implementation in Python:
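A possible sketch of such a class (not production code: it bumps ThreadPoolExecutor's private _max_workers attribute on every submit, so the executor may always start a fresh thread when no existing worker is idle; recent CPython only spawns a new thread when none are idle anyway):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class CachedThreadPoolExecutor(ThreadPoolExecutor):
    """Grow-on-demand pool, loosely mimicking Java's newCachedThreadPool()."""

    def __init__(self):
        super().__init__(max_workers=1)
        self._grow_lock = threading.Lock()

    def submit(self, fn, *args, **kwargs):
        with self._grow_lock:
            # raise the cap so the worker limit can never starve
            # a queued task whose parent is blocked on result()
            self._max_workers += 1
        return super().submit(fn, *args, **kwargs)

pool = CachedThreadPoolExecutor()

def fibonacci(n):
    if n < 2:
        return n
    a = pool.submit(fibonacci, n - 1)
    b = pool.submit(fibonacci, n - 2)
    return a.result() + b.result()

result = fibonacci(10)
print(result)  # 55
pool.shutdown()
```

Because the cap always exceeds the number of submitted tasks, a blocked parent can never deadlock waiting for a child that has no worker to run on.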
Performance tuning:
I strongly suggest looking into gevent, since it will give you high concurrency without the thread overhead. That is not always the case, but your code is actually the poster child for gevent usage. Here's an example:
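A sketch using gevent's spawn/get API (requires the third-party gevent package; greenlets are cooperatively scheduled, so a blocked parent costs almost nothing):

```python
import gevent

def fibonacci(n):
    if n < 2:
        return n
    # spawn greenlets instead of OS threads; get() yields to the
    # gevent hub rather than tying up a pool worker
    a = gevent.spawn(fibonacci, n - 1)
    b = gevent.spawn(fibonacci, n - 2)
    return a.get() + b.get()

print(fibonacci(10))  # 55
```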
Completely unscientific but on my computer the code above runs 9x faster than its threaded equivalent.
I hope this helps.
You generally can't share OS threads between processes at all, regardless of language.
You can arrange to share access to the pool manager with worker processes, but that's probably not a good solution to any problem; see below.
This depends a lot on your data.
On CPython, the general answer is to use a data structure that implements efficient parallel operations. A good example of this is NumPy's optimized array types: here is an example of using them to split a big array operation across multiple processor cores.
The Fibonacci function implemented using blocking recursion is a particularly pessimal fit for any worker-pool-based approach, though: fib(N) will spend much of its time just tying up N workers doing nothing but waiting for other workers. There are many other ways to approach the Fibonacci function specifically (e.g., using CPS to eliminate the blocking and fill a constant number of workers), but it's probably better to decide your strategy based on the actual problems you will be solving, rather than on examples like this.
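To make the CPS idea concrete, here is a hypothetical sketch (the names fib_cps, child_done, and finish are mine, not from any library): each call passes its result to a continuation instead of blocking on child futures, so no worker ever waits, and a fixed pool of two threads suffices.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)  # deliberately small and fixed

def fib_cps(n, cont):
    """Compute fib(n) and pass the result to cont; never blocks a worker."""
    if n < 2:
        cont(n)
        return
    results = []
    lock = threading.Lock()

    def child_done(value):
        # collect the two child results; fire the continuation on the second
        with lock:
            results.append(value)
            done = len(results) == 2
        if done:
            cont(results[0] + results[1])

    pool.submit(fib_cps, n - 1, child_done)
    pool.submit(fib_cps, n - 2, child_done)

final = threading.Event()
answer = []

def finish(value):
    answer.append(value)
    final.set()

fib_cps(10, finish)
final.wait()
print(answer[0])  # 55
pool.shutdown()
```

Since every task either finishes immediately or schedules children and returns, the workers always make progress no matter how small the pool is.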