I'm trying to run multiple threads on GPUs using the PyCUDA MultipleThreads example. When I run my Python file, I get the following error message:
(/root/anaconda3/) root@109c7b117fd7:~/pycuda# python multiplethreads.py
Exception in thread Thread-5:
Traceback (most recent call last):
File "/root/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "multiplethreads.py", line 22, in run
test_kernel(self.array_gpu)
File "multiplethreads.py", line 36, in test_kernel
""")
TypeError: 'module' object is not callable
-------------------------------------------------------------------
PyCUDA ERROR: The context stack was not empty upon module cleanup.
-------------------------------------------------------------------
A context was still active when the context stack was being
cleaned up. At this point in our execution, CUDA may already
have been deinitialized, so there is no way we can finish
cleanly. The program will be aborted now.
Use Context.pop() to avoid this problem.
-------------------------------------------------------------------
Exception in thread Thread-6:
Traceback (most recent call last):
File "/root/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "multiplethreads.py", line 22, in run
test_kernel(self.array_gpu)
File "multiplethreads.py", line 36, in test_kernel
""")
TypeError: 'module' object is not callable
Aborted
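For context, the script follows the MultipleThreads example from the PyCUDA wiki. Below is a trimmed sketch of what that example does (reconstructed and simplified here, so the line numbers in the traceback above will not match exactly): each worker thread creates its own CUDA context, uploads an array, runs a small kernel compiled with SourceModule, and pops the context before exiting, which is what the "Use Context.pop()" message refers to.

    import threading
    import numpy
    import pycuda.driver as cuda
    from pycuda.compiler import SourceModule  # SourceModule must be the class, not a module

    class GPUThread(threading.Thread):
        def __init__(self, number, some_array):
            threading.Thread.__init__(self)
            self.number = number
            self.some_array = some_array

        def run(self):
            # Each thread gets its own context and must pop it before exiting,
            # otherwise PyCUDA reports that the context stack was not empty.
            self.dev = cuda.Device(self.number)
            self.ctx = self.dev.make_context()
            self.array_gpu = cuda.mem_alloc(self.some_array.nbytes)
            cuda.memcpy_htod(self.array_gpu, self.some_array)
            test_kernel(self.array_gpu)
            self.ctx.pop()
            del self.array_gpu
            del self.ctx

    def test_kernel(input_array_gpu):
        # The traceback points at the closing """ of this SourceModule(...) call.
        mod = SourceModule("""
            __global__ void f(float *out, float *in)
            {
                int idx = threadIdx.x;
                out[idx] = in[idx] + 6.0f;
            }
            """)
        func = mod.get_function("f")
        output_array = numpy.zeros((1, 512), dtype=numpy.float32)
        output_array_gpu = cuda.mem_alloc(output_array.nbytes)
        func(output_array_gpu, input_array_gpu, block=(512, 1, 1))
        cuda.memcpy_dtoh(output_array, output_array_gpu)
        return output_array

    if __name__ == "__main__":
        cuda.init()
        some_array = numpy.ones((1, 512), dtype=numpy.float32)
        threads = [GPUThread(i, some_array) for i in range(cuda.Device.count())]
        for t in threads:
            t.start()
        for t in threads:
            t.join()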
I have tried changing the way I import threading, from "import threading" to "from threading import Thread", but the error still persists. Can anyone see what the problem is?
The problem has been solved. It was just a minor error in an import statement: the name I was calling ended up bound to a module rather than a class, which is exactly what the "'module' object is not callable" error was complaining about. It was a careless mistake.
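For anyone who lands on the same traceback: the exact import from the original script isn't shown above, but a common way to get "'module' object is not callable" at the SourceModule("""...""") line is to import pycuda.compiler under the name SourceModule instead of importing the SourceModule class from it. A minimal sketch of the difference (this is an assumption about the mistake, not the original file):

    # Correct: SourceModule is the compiler class, so SourceModule("""...""") works.
    from pycuda.compiler import SourceModule

    # Likely mistake (assumed): this binds the name SourceModule to the
    # pycuda.compiler module itself, so calling SourceModule("""...""")
    # raises TypeError: 'module' object is not callable.
    # import pycuda.compiler as SourceModule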