Does multiprocessing copy the object in this scenario?

Posted 2020-07-27 03:22

Question:

import ctypes
import multiprocessing as mp

import numpy as np

class Test:
    def __init__(self):
        # 100 doubles in shared memory, exposed as a NumPy array
        shared_array_base = mp.Array(ctypes.c_double, 100, lock=False)
        self.a = np.ctypeslib.as_array(shared_array_base)

    def my_fun(self, i):
        self.a[i] = 1

if __name__ == "__main__":
    num_cores = mp.cpu_count()

    t = Test()

    def my_fun_wrapper(i):
        t.my_fun(i)

    with mp.Pool(num_cores) as p:
        p.map(my_fun_wrapper, np.arange(100))

    print(t.a)

In the code above, I'm trying to modify an array using multiprocessing. The function my_fun(), executed in each worker process, should set the value of the array a at the index i that is passed to my_fun() as a parameter. With regard to the code above, I would like to know what is being copied.

1) Is anything in this code copied into each process? I think the object might be, but ideally nothing would be.

2) Is there a way to avoid the wrapper function my_fun_wrapper() around the object's method?

Answer 1:

Almost everything in your code is getting copied, except the shared memory you allocated with multiprocessing.Array. multiprocessing is full of unintuitive, implicit copies.


When you spawn a new process in multiprocessing, the new process needs its own version of just about everything in the original process. This is handled differently depending on platform and settings, but we can tell you're using "fork" mode, because your code wouldn't work in "spawn" or "forkserver" mode - you'd get an error about the workers not being able to find my_fun_wrapper. (Windows only supports "spawn", so we can tell you're not on Windows.)
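
You can reproduce that failure directly; this sketch mirrors the question's layout but forces the "spawn" start method (the exact failure mode varies by Python version):

import multiprocessing as mp

if __name__ == "__main__":
    mp.set_start_method("spawn")

    def my_fun_wrapper(i):
        return i

    with mp.Pool(2) as p:
        # "spawn" workers re-import this file, but the __main__ guard
        # doesn't run on re-import, so my_fun_wrapper never gets defined
        # there; unpickling the task fails with an AttributeError about
        # my_fun_wrapper (and the pool may hang or crash as a result).
        p.map(my_fun_wrapper, range(4))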

In "fork" mode, this initial copy is made by using the fork system call to ask the OS to essentially copy the whole entire process and everything inside. The memory allocated by multiprocessing.Array is sort of "external" and isn't copied, but most other things are. (There's also copy-on-write optimization, but copy-on-write still behaves as if everything was copied, and the optimization doesn't work very well in Python due to refcount updates.)

When you dispatch tasks to worker processes, multiprocessing needs to make even more copies. Any arguments, and the callable for the task itself, are objects in the master process, and objects inherently exist in only one process. The workers can't access any of that, so they need their own versions. multiprocessing handles this second round of copies by pickling the callable and arguments, sending the serialized bytes over interprocess communication, and unpickling them in the worker.


When the master pickles my_fun_wrapper, the pickle just says "look up the my_fun_wrapper function in the __main__ module", and the workers find their own version of my_fun_wrapper when they unpickle it. That my_fun_wrapper looks up a global t, and in the workers, t came from the fork - an object whose array is backed by the shared memory you allocated with your original multiprocessing.Array call.
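
You can inspect that pickle yourself with pickletools; for a module-level function it contains little more than a qualified name (opcodes abridged, and they vary with the pickle protocol):

import pickle
import pickletools

def my_fun_wrapper(i):
    pass

# The disassembly boils down to:
#     SHORT_BINUNICODE '__main__'
#     SHORT_BINUNICODE 'my_fun_wrapper'
#     STACK_GLOBAL
# i.e. "look up my_fun_wrapper in __main__" -- no code, no globals, no t.
pickletools.dis(pickle.dumps(my_fun_wrapper))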

On the other hand, if you try to pass t.my_fun to p.map, then multiprocessing has to pickle and unpickle a method object. The resulting pickle doesn't say "look up the t global variable and get its my_fun method". The pickle says to build a new Test instance and get its my_fun method. The pickle doesn't have any instructions in it about using the shared memory you allocated, and the resulting Test instance and its array are independent of the original array you wanted to modify.
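
A quick way to check that claim without any processes at all (a sketch, reusing the Test class from the question): round-trip the bound method through pickle and compare memory:

import pickle

import numpy as np

t = Test()                                          # Test as defined in the question
t2 = pickle.loads(pickle.dumps(t.my_fun)).__self__  # the rebuilt instance

print(np.shares_memory(t.a, t2.a))   # False: t2 got its own copy of the data
t2.a[0] = 42.0
print(t.a[0])                        # 0.0 -- the original array is untouched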


I know of no good way to avoid needing some sort of wrapper function.
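
The conventional compromise is to keep a wrapper but make it a module-level function, handing the object to the workers through a Pool initializer. This is a common pattern rather than anything the question's code requires, and it relies on the "fork" start method, where the initializer arguments are inherited rather than pickled, so the shared memory still works (a sketch reusing Test from the question):

import multiprocessing as mp

_t = None   # per-worker global, filled in by the initializer

def init_worker(t):
    global _t
    _t = t          # under "fork", this t still wraps the shared memory

def my_fun_wrapper(i):
    _t.my_fun(i)

if __name__ == "__main__":
    t = Test()
    with mp.Pool(mp.cpu_count(), initializer=init_worker, initargs=(t,)) as p:
        p.map(my_fun_wrapper, range(100))
    print(t.a)   # the workers' writes are visible here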