Trying to use the code here https://stackoverflow.com/a/15390953/378594 to convert a numpy array into a shared memory array and back. Running the following code:
shared_array = shmarray.ndarray_to_shm(my_numpy_array)
and then passing shared_array as an argument in the list of arguments for a multiprocessing pool:
pool.map(my_function, list_of_args_arrays)
where list_of_args_arrays contains my shared array and other arguments.
It results in the following error:
PicklingError: Can't pickle <class 'multiprocessing.sharedctypes.c_double_Array_<array size>'>: attribute lookup multiprocessing.sharedctypes.c_double_Array_<array size> failed
where <array_size> is the linear size of my numpy array.
I guess something has changed in numpy or ctypes?
Further details:
I only need read access to the shared data; the processes will not modify it.
The function that calls the pool lies within a class. The class is instantiated and the function is called from a main.py file.
Apparently, when using multiprocessing.Pool, all arguments are pickled, so there was no point in using multiprocessing.Array. Changing the code so that it uses an array of processes did the trick.

I think you are overcomplicating things: there is no need to pickle arrays (especially if they are read-only);
you just need to keep them accessible through some global variable (known to work on Linux, but may not work on Windows).
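A minimal sketch of that global-variable approach (the names column_sum and run, the example data, and the explicit choice of the fork start method are my illustration, not the original answer's code):

```python
# Under the "fork" start method (the default on Linux), worker processes
# inherit the parent's memory, so a module-level array set before the pool
# is created can be read in workers without pickling it for every task.
import multiprocessing

import numpy as np

shared = None  # filled in by the parent before the pool is created


def column_sum(i):
    # Workers read the inherited global; only the index i is pickled.
    return float(shared[:, i].sum())


def run():
    global shared
    shared = np.arange(12, dtype=np.float64).reshape(3, 4)
    ctx = multiprocessing.get_context("fork")  # fork is what makes this work
    with ctx.Pool(2) as pool:
        return pool.map(column_sum, range(4))


if __name__ == "__main__":
    print(run())  # per-column sums of the 3x4 array
```

Because the array travels to the children via fork inheritance rather than pickling, this sidesteps the PicklingError entirely for the read-only case.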
If your arrays need to be both read and written, see this question: Is shared readonly data copied to different processes for Python multiprocessing?
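For the read-write case, a hedged sketch of the general technique (my illustration of wrapping multiprocessing.Array with numpy, not the linked answer's exact code): back the numpy array with a shared ctypes buffer so every process writes into the same memory.

```python
# Each process maps the same shared buffer, so writes made by workers
# are visible in the parent. Names (SHAPE, fill_row, run) are illustrative.
import multiprocessing

import numpy as np

SHAPE = (3, 4)
# lock=False is safe here because each worker writes a disjoint row;
# use the default lock (and buf.get_obj()) if writes can overlap.
buf = multiprocessing.Array('d', SHAPE[0] * SHAPE[1], lock=False)
arr = np.frombuffer(buf, dtype=np.float64).reshape(SHAPE)


def fill_row(i):
    # Writes land in the shared buffer, so the parent sees them too.
    arr[i, :] = i


def run():
    ctx = multiprocessing.get_context("fork")  # children inherit buf/arr
    with ctx.Pool(2) as pool:
        pool.map(fill_row, range(SHAPE[0]))
    return arr.copy()


if __name__ == "__main__":
    print(run())  # row i filled with the value i by the workers
```

Note that the shared buffer must be created before the pool so the forked children inherit it; arrays created afterwards live only in the process that made them.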