I use CUDA in my code, but it still runs slowly, so I changed it to run in parallel using multiprocessing (pool.map) in Python. Now I get CUDA ERROR: initialization error.
This is the function:
    def step_M(self, iter_training):
        gpe, e_tuple_list = iter_training
        g, p, em_iters = gpe
        e_tuple_list = sorted(e_tuple_list, key=lambda tup: tup[0])
        data = self.X[e_tuple_list[0][0]:e_tuple_list[0][1]]
        cluster_indices = np.array(range(e_tuple_list[0][0], e_tuple_list[0][1], 1), dtype=np.int32)
        for i in range(1, len(e_tuple_list)):
            d = e_tuple_list[i]
            cluster_indices = np.concatenate((cluster_indices, np.array(range(d[0], d[1], 1), dtype=np.int32)))
            data = np.concatenate((data, self.X[d[0]:d[1]]))
        g.train_on_subset(self.X, cluster_indices, max_em_iters=em_iters)
        return g, cluster_indices, data
And here is the calling code:

    pool = Pool()
    iter_bic_list = pool.map(self.step_M, iter_training.items())
I found that the problem is CUDA holding a mutex tied to a process ID. When you use the multiprocessing module, a subprocess with a separate PID is spawned, and that subprocess cannot access the GPU because of the mutex.
A quick solution that I found to work is using the threading module instead of the multiprocessing module.
Basically, the same PID that loads the network onto the GPU should be the one that uses it.
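A minimal sketch of that idea: multiprocessing.pool.ThreadPool has the same map interface as multiprocessing.Pool, but its workers are threads in the same process, so the PID that initialized CUDA is the one doing the work. The worker function and data below are simplified stand-ins, not the original step_M:

    # ThreadPool is a drop-in replacement for Pool with thread workers,
    # so every task runs under the parent's PID (no per-PID GPU mutex hit).
    from multiprocessing.pool import ThreadPool
    import os

    def step(item):
        key, value = item
        # Record the PID to show all "workers" share the parent process.
        return key, value * 2, os.getpid()

    iter_training = {"a": 1, "b": 2}
    pool = ThreadPool()
    results = pool.map(step, iter_training.items())
    pool.close()
    pool.join()

    pids = {pid for _, _, pid in results}
    assert pids == {os.getpid()}  # every task ran in the parent process

Note that Python threads share the GIL, so this only helps when the heavy lifting happens on the GPU (or otherwise releases the GIL), which is exactly the situation here.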
I realize this is a bit old, but I ran into the same problem, in my case while running under Celery:
Switching from the prefork pool to an eventlet-based pool resolved the issue. Your code could be updated similarly to:
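Since the original answer's code snippet is missing here, this is a minimal sketch of the pool switch, assuming a typical Celery setup (the config-file name and any project names are placeholders, not from the original post):

    # celeryconfig.py -- switch the worker pool from the default "prefork"
    # (forked workers with separate PIDs) to "eventlet" (green threads in a
    # single process, so the PID that initialized CUDA is the one using the GPU).
    worker_pool = "eventlet"

Equivalently, the pool can be selected when starting the worker, e.g. `celery -A proj worker --pool=eventlet` (the eventlet package must be installed; `proj` is a placeholder app name).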