Assume I am exporting a C++ worker class to Python via Boost.Python. The worker processes a task in a separate thread and, once completed, notifies the Python caller via a callback.
Here is a piece of example C++ code:
#include <thread>
#include <boost/python.hpp>

class Worker
{
public:
    void run()
    {
        _thread = std::thread([=]() {
            // Initialize and acquire the global interpreter lock
            PyEval_InitThreads();
            // Ensure that the current thread is ready to call the Python C API
            PyGILState_STATE state = PyGILState_Ensure();
            // Invoke the Python callback
            boost::python::call<void>(this->_callback);
            // Release the global interpreter lock so other threads can resume execution
            PyGILState_Release(state);
        });
    }

    void setPyCallback(PyObject* callable) { _callback = callable; }

private:
    std::thread _thread;
    PyObject* _callback;
};
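The question does not show how the class is exported, so the following is only a plausible sketch of the Boost.Python module definition, with the module named worker to match the import in test.py below:

#include <boost/noncopyable.hpp>
#include <boost/python.hpp>

BOOST_PYTHON_MODULE(worker)
{
    // Worker holds a std::thread, so it cannot be copied; mark it noncopyable
    boost::python::class_<Worker, boost::noncopyable>("Worker")
        .def("run", &Worker::run)
        .def("setPyCallback", &Worker::setPyCallback);
}

How this is built into the extension module (e.g. worker.so) depends on your toolchain and is assumed here.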
Now I have the Python code in a script file test.py:
$ cat test.py
import time
import worker
def mycallback():
    print "callback called"
a = worker.Worker()
a.setPyCallback(mycallback)
a.run()
time.sleep(1)
If I run the above script in an interactive session, e.g. IPython, it works without a problem.
Problem:
However, running the script from the command line as python test.py just gets stuck at PyGILState_STATE state = PyGILState_Ensure();.
If I understand correctly, the worker thread is trying to acquire the GIL so it can execute the callback, while the main Python thread is busy sleeping, hence the deadlock.
Question: What should I change in the Python script / C++ code so that running the script file can request a task to be done in C++, wait a bit, and have the result printed asynchronously?
===================
With @Giulio's hint, I was able to solve the problem: PyEval_InitThreads() must be called from the main thread, not from a C++-managed thread, as the docs say.
After moving PyEval_InitThreads(); from the lambda to the top of Worker::run() (which is called from Python code running in the main thread), the callback now works flawlessly even while the main Python thread is sleeping. I need to emphasise, though, that PyEval_InitThreads() is still required in run(); it cannot simply be removed.
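For reference, a minimal sketch of the corrected run(), with the rest of the class unchanged:

void run()
{
    // Must be called from the main thread (which is the thread that calls run()
    // from Python) before any other thread touches the Python C API.
    PyEval_InitThreads();
    _thread = std::thread([=]() {
        // Make this C++-managed thread ready to call the Python C API
        PyGILState_STATE state = PyGILState_Ensure();
        // Invoke the Python callback
        boost::python::call<void>(this->_callback);
        // Release the GIL so other threads can resume execution
        PyGILState_Release(state);
    });
}

With this change, python test.py prints "callback called" from the worker thread while the main thread is still inside time.sleep(1).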