Multiple independent embedded Python Interpreters

Posted 2019-01-17 11:00

Question:

Embedding the Python interpreter in a C/C++ application is well documented. What is the best approach to running multiple Python interpreters on multiple operating-system threads (i.e. one interpreter per OS thread within the same process), invoked from the C/C++ application? Such applications may also run into problems with memory fragmentation and the limitations of Py_Finalize().

One such approach can be the following:

  1. Python threading, and hence the GIL, is disabled in pyconfig.h to keep things simple (#undef WITH_THREAD)
  2. All mutable global variables of the Python interpreter source code are moved into a heap-allocated struct referenced via Thread Local Storage (reference: Python on a Phone); a sketch of this idea follows the list.
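To make item 2 concrete, here is a minimal C sketch of the general pattern, assuming POSIX threads: each OS thread lazily allocates its own state block on the heap and reaches it through a TLS key, so what used to be a process-wide global becomes per-thread. The struct members and helper names here are hypothetical illustrations, not actual CPython internals.

    /* Hypothetical sketch of the "globals via TLS" idea; names are illustrative. */
    #include <stdlib.h>
    #include <pthread.h>

    /* Stand-in for the interpreter's formerly-global mutable state. */
    typedef struct {
        int   recursion_limit;
        char *prog_name;
        /* ... every other formerly-global variable would live here ... */
    } InterpState;

    static pthread_key_t  interp_key;
    static pthread_once_t interp_key_once = PTHREAD_ONCE_INIT;

    static void make_key(void) {
        /* Destructor `free` releases the per-thread block at thread exit. */
        pthread_key_create(&interp_key, free);
    }

    /* Each OS thread gets its own heap-allocated state block. */
    static InterpState *get_interp_state(void) {
        pthread_once(&interp_key_once, make_key);
        InterpState *st = pthread_getspecific(interp_key);
        if (st == NULL) {
            st = calloc(1, sizeof(InterpState));
            st->recursion_limit = 1000;      /* per-thread default */
            pthread_setspecific(interp_key, st);
        }
        return st;
    }

    /* An access to a formerly-global variable then becomes: */
    int get_recursion_limit(void) {
        return get_interp_state()->recursion_limit;
    }

Applying this mechanically across the whole interpreter source is exactly the conversion question 2 below asks about.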

My questions are:

  1. Is there any better approach?
  2. Are there any tools that can automate converting the global variables of the Python interpreter source code into a heap-allocated struct referenced via TLS (Thread Local Storage)?

Similar topics are discussed here:

  • Multiple independent Python interpreters in a C/C++ program?
  • Multiple python interpreters within the same process
  • Lua Versus Python

Answer 1:

It's not exactly an answer to your question, but you could use separate processes instead of threads, and then these problems should vanish.

Pros:

  • No need to hack Python (and to verify that the result works in all of the intended cases)
  • Probably less development effort overall
  • Easier upgrades to new Python versions
  • Clearly defined interfaces between different processes, thus easier to get right and debug

Cons:

  • Possibly slightly more overhead, depending on your platform (processes are relatively lightweight on Linux)

If you use shared memory for IPC, your resulting application code shouldn't differ too much from what you'd get with threads.
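As a rough illustration of the process-per-interpreter idea, here is a minimal C sketch that forks a child running an independent python3 process and reads its result back. A plain pipe is used here for brevity; shared memory (e.g. shm_open/mmap) would slot into the same structure. The python3 command name and the one-liner it runs are just placeholders.

    /* Minimal sketch: one independent Python interpreter per child process. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) != 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: a completely independent Python interpreter. */
            dup2(fds[1], STDOUT_FILENO);   /* python's stdout -> pipe */
            close(fds[0]);
            close(fds[1]);
            execlp("python3", "python3", "-c", "print(6 * 7)", (char *)NULL);
            _exit(127);                    /* exec failed */
        }

        /* Parent: read the child's result over the pipe. */
        close(fds[1]);
        char buf[64] = {0};
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        if (n > 0) printf("child interpreter said: %s", buf);
        close(fds[0]);
        waitpid(pid, NULL, 0);
        return 0;
    }

Crashes, memory fragmentation, and Py_Finalize() quirks stay contained in the child process, which is the main appeal of this route.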

Given that some people argue you should always prefer processes over threads, I'd at least consider it as an alternative if it fits your constraints at all.