Parfor for Python

Posted 2019-03-08 22:18

I am looking for a definitive answer to MATLAB's parfor in Python (SciPy, NumPy).

Is there a solution similar to parfor? If not, what is the complication for creating one?

UPDATE: Here is typical numerical computation code that I need to speed up:

import numpy as np

N = 2000
output = np.zeros([N,N])
for i in range(N):
    for j in range(N):
        output[i,j] = HeavyComputationThatIsThreadSafe(i,j)

An example of a heavy computation function is:

import numpy as np
import scipy.optimize

def HeavyComputationThatIsThreadSafe(i, j):
    n = i * j

    return scipy.optimize.anneal(lambda x: np.sum((x - np.arange(n)**2)),
                                 np.random.random((n, 1)))[0][0, 0]
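Note that scipy.optimize.anneal was removed in SciPy 0.16. A rough equivalent that runs on current SciPy, with scipy.optimize.dual_annealing swapped in as a stand-in (my substitution; the bounds and the guard for n == 0 are illustrative assumptions, not part of the original):

import numpy as np
import scipy.optimize

def HeavyComputationThatIsThreadSafe(i, j):
    n = max(i * j, 1)              # guard: n == 0 when i or j is 0 (assumption)
    bounds = [(-10.0, 10.0)] * n   # dual_annealing requires explicit bounds (assumed values)
    result = scipy.optimize.dual_annealing(
        lambda x: np.sum(x - np.arange(n)**2), bounds)
    return result.fun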

6 Answers
趁早两清 · 2019-03-08 22:52

There are many Python frameworks for parallel computing. The one I happen to like most is IPython, but I don't know too much about any of the others. In IPython, one analogue to parfor would be client.MultiEngineClient.map() or some of the other constructs in the documentation on quick and easy parallelism.
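MultiEngineClient comes from the old IPython.kernel API and no longer exists; in the modern ipyparallel package the same idea looks roughly like this (a minimal sketch, assuming engines have been started with ipcluster start -n 4):

import ipyparallel as ipp

rc = ipp.Client()               # connect to the running engines
view = rc.load_balanced_view()  # schedule tasks across all engines

def square(x):
    return x * x

results = view.map_sync(square, range(100))  # a parfor-like parallel map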

三岁会撩人 · 2019-03-08 22:58

The one built into Python is multiprocessing; the docs are here. I always use multiprocessing.Pool with as many workers as processors. Then whenever I need a for-loop-like structure I use Pool.imap.

As long as the body of your function does not depend on any previous iteration, you should get near-linear speed-up. This also requires that your inputs and outputs are picklable, but that is pretty easy to ensure for standard types.

UPDATE: Some code for your updated function just to show how easy it is:

import numpy as np
from multiprocessing import Pool
from itertools import product

def Fun(args):
    # imap passes a single argument, so unpack the (i, j) tuple here
    i, j = args
    return HeavyComputationThatIsThreadSafe(i, j)

N = 2000
output = np.zeros((N, N))
pool = Pool()   # defaults to the number of available CPUs
chunksize = 20  # this may take some guessing ... take a look at the docs to decide
for ind, res in enumerate(pool.imap(Fun, product(range(N), range(N)), chunksize)):
    output.flat[ind] = res
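One caveat (my addition): on platforms that start workers with the spawn method (Windows, and macOS on recent Python versions), the Pool creation and the loop must be guarded by if __name__ == '__main__': so that worker processes can import the module without re-executing it.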
仙女界的扛把子 · 2019-03-08 23:07

This can be done elegantly with Ray, a system that allows you to easily parallelize and distribute your Python code.

To parallelize your example, you'd need to define your functions with the @ray.remote decorator, and then invoke them with .remote.

import numpy as np
import time

import ray

ray.init()

# Define the function. Each remote function will be executed 
# in a separate process.
@ray.remote
def HeavyComputationThatIsThreadSafe(i, j):
    n = i*j
    time.sleep(0.5) # Simulate some heavy computation. 
    return n

N = 10
output_ids = []
for i in range(N):
    for j in range(N):
        # Remote functions return a future, i.e, an identifier to the 
        # result, rather than the result itself. This allows invoking
        # the next remote function before the previous finished, which
        # leads to the remote functions being executed in parallel.
        output_ids.append(HeavyComputationThatIsThreadSafe.remote(i,j))

# Get results when ready.
output_list = ray.get(output_ids)
# Move results into an NxN numpy array.
outputs = np.array(output_list).reshape(N, N)

# This program should take approximately N*N*0.5s/p, where
# p is the number of cores on your machine, N*N
# is the number of times we invoke the remote function,
# and 0.5s is the time it takes to execute one instance
# of the remote function. For example, for two cores this
# program will take approximately 25sec. 

There are a number of advantages of using Ray over the multiprocessing module. In particular, the same code will run on a single machine as well as on a cluster of machines. For more advantages of Ray see this related post.

Note: One point to keep in mind is that each remote function is executed in a separate process, possibly on a different machine, so the remote function's computation should take longer than the overhead of invoking it. As a rule of thumb, a remote function's computation should take at least a few tens of milliseconds to amortize its scheduling and startup overhead.

何必那么认真 · 2019-03-08 23:08

I tried all the solutions here, but found that the simplest way and closest equivalent to MATLAB's parfor is Numba's prange.

Essentially you change a single word in your loop, range to prange:

from numba import njit, prange

# autojit has been removed from Numba; use njit(parallel=True),
# which is also what allows prange to actually run in parallel.
@njit(parallel=True)
def parallel_sum(A):
    s = 0.0
    for i in prange(A.shape[0]):
        s += A[i]

    return s
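A caveat worth adding: prange only parallelizes loops whose bodies Numba can compile in nopython mode, so it will not help with loops that call into arbitrary Python code such as scipy.optimize. A minimal sketch of the question's 2D loop under that assumption, with i * j standing in as a placeholder for a compilable heavy computation (not the asker's actual function):

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def fill_output(N):
    output = np.zeros((N, N))
    for i in prange(N):           # the outer loop is split across threads
        for j in range(N):
            output[i, j] = i * j  # placeholder for the real computation
    return output

output = fill_output(2000)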
劳资没心,怎么记你 · 2019-03-08 23:11

Jupyter Notebook

To see an example, suppose you want to write the equivalent of this MATLAB code in Python:

matlabpool open 4   % parpool(4) in newer MATLAB releases
parfor n=0:9
   for i=1:10000
       for j=1:10000
           s=j*i   
       end
   end
   n
end
disp('done')

Here is how one may write this in Python, particularly in a Jupyter notebook. You have to create a function in a file in the working directory (I called it FunForParFor.py) with the following contents; it must live in an importable module rather than in the notebook itself, because on platforms that spawn worker processes (e.g. Windows) multiprocessing pickles a reference to the function and re-imports it in each worker:

def func(n):
    for i in range(10000):
        for j in range(10000):
            s=j*i
    print(n)

Then I go to my Jupyter notebook and write the following code

import multiprocessing  
import FunForParFor

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)
    pool.map(FunForParFor.func, range(10))
    pool.close()
    pool.join()   
    print('done')
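A side note (my addition): pool.map returns a list of the workers' return values, so if func returned s instead of printing n, the results could be collected directly, e.g. results = pool.map(FunForParFor.func, range(10)).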

This has worked for me! I just wanted to share it here to give you a particular example.

迷人小祖宗 · 2019-03-08 23:16

I've always used Parallel Python, but it's not a complete analogue, since I believe it typically uses separate processes, which can be expensive on certain operating systems. Still, if the bodies of your loops are chunky enough, this won't matter and it can actually have some benefits.
