How to parallelize complicated for loops

Posted 2019-08-12 08:29

I have a complicated for loop which contains multiple operations for multiple records in a loop. The loop looks like this:

for i, j, k in zip(is_, js, ks):  # note: "is" is a reserved word in Python, so the first list is named is_ here
    #declare multiple lists.. like
    a = []
    b = []
    #...
    if i:
        for items in i:
            values = items['key'].split("--")
            #append the values to the declared lists
            a.append(values[0])
            b.append(values[1])
    # also other operations with j and k, where j is a dict of dicts.
    if "substring" in k:
        for k, v in j["key"].items():
            l = "string"
            t = v
    else:
        for k, v in j["key2"].items():
            l = k
            t = v

            # construct an object with all the lists/params
            content = {
                'sub_content': {
                    "a":a,
                    "b":b,
                    # ... more lists/params
                }
            }

            #form a tuple. We are interested in this tuple.
            data_tuple = (content,t,l)

Considering the above for loop, how do I parallelize it? I've looked into multiprocessing but I have not been able to parallelize such a complex loop. I am also open to suggestions that might perform better here including parallel language paradigms like OpenMP/MPI/OpenACC.

1 Answer

放荡不羁爱自由 · 2019-08-12 09:10

You can use the Python multiprocessing library. As noted in this excellent answer, you should first figure out whether you need multiprocessing or multithreading.

Bottom line: if you need multithreading, use multiprocessing.dummy. If you are only doing CPU-intensive tasks with no I/O or shared dependencies, use multiprocessing.

multiprocessing.dummy replicates the API of the multiprocessing module but uses threads instead (an important distinction: use multiple processes for CPU-intensive tasks, and threads for (and during) I/O):

Set up the zip object

#!/usr/bin/env python3

import numpy as np

n = 2000
xs = np.arange(n)
ys = np.arange(n) * 2
zs = np.arange(n) * 3

zip_obj = zip(xs, ys, zs)
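
One caveat worth keeping in mind here (my addition, not part of the original answer): zip() returns a one-shot iterator, so once pool.map has consumed it, iterating again yields nothing. A minimal sketch with plain lists instead of the NumPy arrays above:

```python
n = 5
xs = list(range(n))
ys = [x * 2 for x in xs]
zs = [x * 3 for x in xs]

zip_obj = zip(xs, ys, zs)
first_pass = list(zip_obj)   # consumes the iterator
second_pass = list(zip_obj)  # iterator is already exhausted -> empty list

# first_pass == [(0, 0, 0), (1, 2, 3), (2, 4, 6), (3, 6, 9), (4, 8, 12)]
# second_pass == []
```

If you need to map over the same triples more than once, materialize them with list(zip(...)) first.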

Simple example function

def my_function(my_tuple):
    iv, jv, kv = my_tuple
    return f"{iv}-{jv}-{kv}"

Set up the multi-threading.

from multiprocessing.dummy import Pool as ThreadPool
pool = ThreadPool(4)
data_tuple = pool.map(my_function, zip_obj)
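
A small addition to the snippet above: the pool should be closed and joined when you are done with it. Using the pool as a context manager handles that automatically. A self-contained sketch with a tiny input:

```python
from multiprocessing.dummy import Pool as ThreadPool

def my_function(my_tuple):
    iv, jv, kv = my_tuple
    return f"{iv}-{jv}-{kv}"

# the "with" block closes and joins the pool on exit
with ThreadPool(4) as pool:
    data_tuple = pool.map(my_function, zip([0, 1], [0, 2], [0, 3]))

# data_tuple == ["0-0-0", "1-2-3"]
```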

Your full example

def my_function(my_tuple):
    i, j, k = my_tuple
    # declare multiple lists, e.g.
    a = []
    b = []
    # ...
    if i:
        for items in i:
            values = items['key'].split("--")
            # append the values to the declared lists
            a.append(values[0])
            b.append(values[1])
    # also other operations with j and k, where j is a dict of dicts.
    # note: the inner loop variable is renamed to "key" so it does not shadow k
    if "substring" in k:
        for key, v in j["key"].items():
            l = "string"
            t = v
    else:
        for key, v in j["key2"].items():
            l = key
            t = v
    # construct an object called content with all the lists/params, like
    content = {
        'sub_content': {
            "a": a,
            "b": b,
            # ... more lists/params
        }
    }
    # form a tuple. We are interested in this tuple.
    return (content, t, l)


from multiprocessing.dummy import Pool as ThreadPool
pool = ThreadPool(4)
zip_obj = zip(is_, js, ks)  # "is" is a reserved word in Python, hence is_
data_tuple = pool.map(my_function, zip_obj)
# Do whatever you need to do w/ data_tuple here