Let's consider a function f with one parameter to map (the one that varies) and several fixed parameters:
f(var, param1, param2, param3, param4)
The param_i are fixed values (like ints, floats, booleans, ...) and the var parameter is a list of objects.
Current approach:
p1 = 0
p2 = 200
var = [[Object(x, p1, p2), Object(y, p1, p2), Object(z, p1, p2, test=True)]
       for x in range(5) for y in range(10) for z in range(25)]
for v in var:
    f(v, param1, param2, param3, param4)
Since the computation on one element of var does not depend on the others, I currently slice my list var and start the program N times with N different slices, so I get N programs running on the N cores of my computer. It's a manual way of doing multiprocessing, but it is not really convenient to keep track of what has been done and what still has to be computed.
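For concreteness, the splitting step looks roughly like this (a sketch; N = 16 and the ceiling-division chunk size are my assumptions, not the exact code):

N = 16  # one slice per core (assumption)
chunk_size = (len(var) + N - 1) // N  # ceiling division
slices = [var[i:i + chunk_size] for i in range(0, len(var), chunk_size)]
# Each of the N program instances then processes one element of `slices`.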
I would like to implement the multithreading / multiprocessing directly in the program. Without fixed parameters, I found this approach, which seems to work:
from multiprocessing import Pool
p = Pool(processes=16)  # 16 cores.
p.map(f, var)
p.terminate()
In the small example above, I didn't use my actual f function; it was for testing purposes only. How can I do the same with my f function, which also has fixed parameters? What is the best way? Thanks!
Python version: 3.6
EDIT: I would also like to track the progress of the computation. Currently, my code is:
for i, v in enumerate(var):
    print("{} / {}".format(i, len(var)))
    f(v, param1, param2, param3, param4)
Could this also be done with multiprocessing?
If I understand you correctly, your question can be rewritten as:
How to easily map a function with several fixed parameters using multiprocessing?
Something like the following?
p.starmap(f, [(v, fix1, fix2, fix3) for v in dynamics])
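Here is a minimal, runnable sketch of that starmap approach (the toy f, the fixed values, and the pool size are placeholders, not your real ones):

from multiprocessing import Pool

def f(v, fix1, fix2, fix3):
    # Placeholder for the real function.
    return v * fix1 + fix2 - fix3

if __name__ == "__main__":
    dynamics = list(range(10))
    with Pool(processes=4) as p:
        # Each tuple is unpacked into f's positional arguments.
        results = p.starmap(f, [(v, 1, 2, 3) for v in dynamics])
    print(results)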
I think you can wrap your f function. For example (a minimal sketch):
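# Assumes f and the fixed parameters param1..param4 are defined at module
# level, so that fixed_para_wrapper can be pickled by multiprocessing.
def fixed_para_wrapper(v):
    return f(v, param1, param2, param3, param4)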
Then you can use it like p.map(fixed_para_wrapper, dynamics). Alternatively, if the fixed arguments can be passed by keyword, functools.partial(f, param1=..., param4=...) builds an equivalent picklable callable.
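As for the progress tracking from your EDIT: one option (a sketch, reusing the fixed_para_wrapper above) is to consume the results lazily with Pool.imap, which yields them in input order as they become available, so you can print a counter in the main process:

from multiprocessing import Pool

if __name__ == "__main__":
    # dynamics is your list of work items, as in the question.
    with Pool(processes=16) as p:
        for i, result in enumerate(p.imap(fixed_para_wrapper, dynamics), 1):
            print("{} / {}".format(i, len(dynamics)))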