I'd like to select, in my main function, which module to import based on an argument passed to the Python script. So I'm using one of
blah = importlib.import_module("blah1")
blah = importlib.import_module("blah2")
where the 'blahX' are different implementations of the same interface.
I also want to pass the work off to different processes using the multiprocessing
module.
import importlib
import multiprocessing

blah = None

def f(a, b):
    print(blah.f(a, b))

if __name__ == '__main__':
    # call importlib.import_module here...
    a = 1
    b = 2
    p = multiprocessing.Process(target=f, args=(a, b))
    p.start()
    p.join()
The problem is that the function passed to multiprocessing.Process
isn't aware of the module I imported in main. It works if I use a plain import at the top of the file instead:
import blah1 as blah
#import blah2 as blah
but then I lose the ability to choose a module at run-time.
How can I fix this design?
When you call
multiprocessing.Process(...)
, the multiprocessing module forks a subprocess (on Unix) or starts a new Python process and imports the calling module (on Windows). On Unix your current code would work, because all globals at the time of the fork are inherited by the subprocess. On Windows, however, the calling module is re-imported. Since the assignment to blah, e.g.
blah = importlib.import_module("blah1")
is protected inside the
if __name__ == '__main__'
block (which is not executed on re-import), the new binding of
blah
does not transfer to the subprocess. So to make it work on Windows as well, you could pass the name of the module to the target function, and call
importlib.import_module
there:
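A sketch of that approach, still assuming blah1 and blah2 are modules that expose an f(a, b) function as in the question:

```python
import importlib
import multiprocessing

def f(module_name, a, b):
    # Import inside the worker, so the subprocess resolves the module
    # itself regardless of the start method (fork or spawn).
    blah = importlib.import_module(module_name)
    print(blah.f(a, b))

if __name__ == '__main__':
    module_name = 'blah1'  # or 'blah2', e.g. chosen from sys.argv
    a = 1
    b = 2
    p = multiprocessing.Process(target=f, args=(module_name, a, b))
    p.start()
    p.join()
```

The import now happens in the child process itself, so it no longer matters whether the parent's globals are inherited.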