I have some code that looks like this:
    for item in list:
        <bunch of slow python code, depending only on item>
I want to speed this up by parallelizing the loop. Normally the multiprocessing module would be perfect for this (see the answers to this question), but it was added in Python 2.6 and I'm stuck on 2.4.
What's the best way to parallelize a python loop in python 2.4?
You might be looking for os.fork, which makes it easy for each child process to work on its own item.
- http://docs.python.org/release/2.4/lib/os-process.html
- http://en.wikipedia.org/wiki/Fork_%28operating_system%29#Example_in_Python
Your for loop will need to look a little different, though -- you want to break out as soon as fork returns zero, since that return value means you're in the child process.
    import os

    L = ["a", "b", "c"]

    for item in L:
        pid = os.fork()
        if pid == 0:
            # Child process: stop looping and handle this item below.
            break
        else:
            # Parent process: fork returned the child's pid.
            print "Forked:", pid

    if pid != 0:
        print "Main Execution, Ends"
    else:
        print "Execution:", item
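One thing the snippet above skips is cleanup: the children fall off the end of the script, and the parent never waits for them. A minimal sketch of the full fork/wait pattern looks like this (the names `run_in_children` and `worker` are mine, not from any library; this is POSIX-only, but works on 2.4 and later):

```python
import os

def run_in_children(items, worker):
    """Fork one child per item, run worker(item) in each child,
    then wait for every child in the parent."""
    pids = []
    for item in items:
        pid = os.fork()
        if pid == 0:
            worker(item)   # child: do the slow per-item work
            os._exit(0)    # child: exit without running parent cleanup
        pids.append(pid)
    for pid in pids:       # parent: reap each child
        os.waitpid(pid, 0)
```

Note that forked children can't hand Python objects back to the parent directly; if the loop body produces results, you'd need pipes or temporary files to collect them.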
I'm not familiar with Python 2.4 specifically, but subprocess was added in 2.4 -- have you tried using subprocess.Popen to spawn new processes?
    from subprocess import Popen

    processes = []
    for x in range(n):  # n = number of worker processes
        processes.append(Popen('python doWork.py', shell=True))

    # Wait for every worker to finish
    for process in processes:
        process.wait()
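To make that runnable without a separate doWork.py, the same pattern can spawn interpreters directly and pass each item as a command-line argument; the inline `-c` script below is just a stand-in for the real worker:

```python
import sys
from subprocess import Popen

items = ["a", "b", "c"]

processes = []
for item in items:
    # One interpreter per item; a real worker script (e.g. doWork.py)
    # would read the item from sys.argv[1] the same way.
    p = Popen([sys.executable, "-c",
               "import sys; sys.exit(0 if sys.argv[1] else 1)", item])
    processes.append(p)

# wait() blocks until each worker exits and sets p.returncode
for p in processes:
    p.wait()
```

Passing an argument list instead of a shell string also avoids quoting problems when the items contain spaces.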