Here is the pseudo code for what I want to do.

    import time
    from multiprocessing import Process

    def run():
        x = 0
        while x < 10000000:
            x += 1

    if __name__ == "__main__":
        p = Process(target=run)
        p.start()
        time.sleep(3)
        # some code that I don't know that will give me the current value of x
Python's threading module seems to be the way to go, but I have yet to successfully implement this example.
It really depends on what you're trying to accomplish, and on how often you create subprocesses and how much memory they use. With a few long-lived ones, you can easily get away with multiple OS-level processes (see the subprocess module). If you're spawning a lot of little ones, threading is faster and has less memory overhead, but with threading you run into problems like thread safety, the global interpreter lock, and nasty, boring stuff like semaphores and deadlocks.

Data-sharing strategies between two processes or threads can be roughly divided into two categories: "let's share a block of memory" (using Locks and Mutexes) and "let's share copies of data" (using messaging, pipes, or sockets).

The sharing method is light on memory but difficult to manage, because you must ensure that one thread doesn't read a part of shared memory while another thread is writing to it, which is not trivial and is hard to debug. The copying method is heavier on memory, but easier to reason about. It also has the distinct advantage of porting fairly trivially to a network, allowing for distributed computing.

You'll also have to think about the underlying OS. I don't know the specifics, but some are better than others at the different approaches.
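To make the contrast concrete, here is a minimal sketch of both strategies using the standard multiprocessing module (the worker functions, loop sizes, and reporting interval are illustrative, not from the question):

    from multiprocessing import Process, Value, Pipe

    # Strategy 1: share a block of memory. A multiprocessing.Value lives in
    # shared memory, and its built-in lock keeps one process from reading
    # the counter while another is halfway through writing it.
    def count_shared(x):
        for _ in range(1000000):
            with x.get_lock():
                x.value += 1

    # Strategy 2: share copies of data. The worker sends *copies* of its
    # counter down a pipe; nothing is shared, so no locking is needed.
    def count_copied(conn):
        x = 0
        for _ in range(1000000):
            x += 1
            if x % 100000 == 0:
                conn.send(x)
        conn.close()

    if __name__ == "__main__":
        x = Value("i", 0)
        p = Process(target=count_shared, args=(x,))
        p.start()
        p.join()
        print("shared-memory result:", x.value)

        parent_end, child_end = Pipe()
        p = Process(target=count_copied, args=(child_end,))
        p.start()
        child_end.close()  # so recv() raises EOFError once the worker exits
        while True:
            try:
                print("copy received over pipe:", parent_end.recv())
            except EOFError:
                break
        p.join()

Note the trade-off: the shared version pays for a lock on every increment, while the pipe version pays for copying each message instead.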
I'd say start with something like RabbitMQ.
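If you do go the messaging route, the sketch below shows roughly what that looks like with the third-party pika client. This assumes a RabbitMQ broker running on localhost, and the queue name "progress" is made up for illustration:

    import pika

    # Producer: the worker publishes snapshots of its counter as messages.
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="progress")
    x = 0
    while x < 10000000:
        x += 1
        if x % 1000000 == 0:
            channel.basic_publish(exchange="", routing_key="progress", body=str(x))
    conn.close()

    # Consumer (in another process, or on another machine entirely):
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="progress")
    method, properties, body = channel.basic_get(queue="progress", auto_ack=True)
    if method is not None:  # None means the queue was empty
        print("latest progress:", body.decode())
    conn.close()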
Well, here it is. It turned out to be a duplicate; my version is just more generic.
Everything you need is in the multiprocessing module. Perhaps a shared memory object would help here? Note that threading in Python is affected by the Global Interpreter Lock, which prevents threads from executing Python bytecode in parallel, so CPU-bound threads don't actually run concurrently.
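As a minimal sketch of that idea applied to the original example: a shared multiprocessing.Value holds x, so the parent can read it while the worker counts (this assumes a slightly stale reading is acceptable):

    import time
    from multiprocessing import Process, Value

    def run(x):
        # Increment the shared counter; taking the Value's built-in lock
        # keeps each read-modify-write atomic.
        while x.value < 10000000:
            with x.get_lock():
                x.value += 1

    if __name__ == "__main__":
        x = Value("i", 0)  # 'i' = C int, allocated in shared memory
        p = Process(target=run, args=(x,))
        p.start()
        time.sleep(3)
        print(x.value)  # the current value of x, read from the parent
        p.join()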