I have a python script which launches several processes. Each process basically just calls a shell script:
from multiprocessing import Process
import os
import logging
def thread_method(n = 4):
    global logger
    command = "~/Scripts/run.sh " + str(n) + " >> /var/log/mylog.log"
    if (debug): logger.debug(command)
    os.system(command)
I launch several of these threads, which are meant to run in the background. I want to have a timeout on these threads, such that if they exceed the timeout, they are killed:
t = []
for x in range(10):
    try:
        t.append(Process(target=thread_method, args=(x,)))
        t[-1].start()
    except Exception as e:
        logger.error("Error: unable to start thread")
        logger.error("Error message: " + str(e))

logger.info("Waiting up to 60 seconds to allow threads to finish")
t[0].join(60)
for n in range(len(t)):
    if t[n].is_alive():
        logger.info(str(n) + " is still alive after 60 seconds, forcibly terminating")
        t[n].terminate()
The problem is that calling terminate() on the processes isn't killing the launched run.sh script - it continues running in the background until I either force kill it from the command line or it finishes on its own. Is there a way to have terminate() also kill the subshell created by os.system()?
You should use an event to signal the worker to terminate, run the subprocess with the subprocess module, then terminate it with Popen.terminate(). Calling Process.terminate() will not allow the worker to clean up. See the documentation for Process.terminate().
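A rough sketch of that approach, reusing the script path and log file from the question; the worker, stop_event, and timeout names are illustrative, not part of any API:

import os
import time
from multiprocessing import Process, Event
from subprocess import Popen

def worker(n, stop_event, timeout=60):
    # launch run.sh directly (no intermediate shell), appending to the log
    command = [os.path.expanduser("~/Scripts/run.sh"), str(n)]
    with open("/var/log/mylog.log", "a") as log:
        proc = Popen(command, stdout=log)
        deadline = time.time() + timeout
        # poll until the script exits, the deadline passes, or we are told to stop
        while proc.poll() is None and time.time() < deadline and not stop_event.is_set():
            time.sleep(0.1)
        if proc.poll() is None:
            proc.terminate()  # SIGTERM goes to run.sh itself
            proc.wait()       # reap the child before the worker exits

if __name__ == "__main__":
    stop = Event()
    workers = [Process(target=worker, args=(x, stop)) for x in range(10)]
    for w in workers:
        w.start()
    time.sleep(60)  # give the scripts up to 60 seconds
    stop.set()      # ask any remaining workers to terminate their children
    for w in workers:
        w.join()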
Use the subprocess module instead; its Popen objects have a terminate() method explicitly for this.
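For example, a minimal, hypothetical sketch reusing the script path from the question; the list form runs run.sh directly, with no intermediate shell, so terminate() signals the script itself:

import os
import time
from subprocess import Popen

with open("/var/log/mylog.log", "a") as log:
    proc = Popen([os.path.expanduser("~/Scripts/run.sh"), "4"], stdout=log)
    time.sleep(60)           # give the script up to 60 seconds
    if proc.poll() is None:  # still running?
        proc.terminate()     # send SIGTERM to run.sh
    proc.wait()              # reap the child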
In Python 3.3, the subprocess module supports a timeout: http://docs.python.org/dev/library/subprocess.html
For other solutions regarding Python 2.x, please have a look at this thread: Using module 'subprocess' with timeout
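A short sketch of the Python 3.3+ behaviour, using the script path from the question; call() kills the child itself when the timeout expires and then re-raises TimeoutExpired:

import os
import subprocess

command = [os.path.expanduser("~/Scripts/run.sh"), "4"]
try:
    # Python 3.3+: enforce the timeout in subprocess itself
    subprocess.call(command, timeout=60)
except subprocess.TimeoutExpired:
    print("run.sh did not finish within 60 seconds and was killed")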
Based on Stop reading process output in Python without hang?:
import logging
import os
import time
from subprocess import Popen

logger = logging.getLogger(__name__)
timeout = 60  # seconds to allow the scripts to finish

def start_process(n, stdout):
    # no need for `global logger`: you don't assign to it
    command = [os.path.expanduser("~/Scripts/run.sh"), str(n)]
    logger.debug(command)  # no need for `if (debug)`; set the logging level instead
    return Popen(command, stdout=stdout)  # run directly

# no need to use threads; Popen is asynchronous
with open('/tmp/scripts_output.txt', 'a') as file:
    processes = [start_process(i, file) for i in range(10)]

    # wait at most `timeout` seconds for the processes to complete
    # you could use p.wait() and signal.alarm or threading.Timer instead
    endtime = time.time() + timeout
    while any(p.poll() is None for p in processes) and time.time() < endtime:
        time.sleep(.04)

    # terminate unfinished processes
    for p in processes:
        if p.poll() is None:
            p.terminate()
            p.wait()  # blocks if `kill pid` is ignored
Use p.wait(timeout) if it is available.
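For reference, a sketch of the same structure with the polling loop replaced by p.wait(timeout), assuming Python 3.3+ (the output file and script path are the ones used above):

import os
import time
from subprocess import Popen, TimeoutExpired

timeout = 60
with open('/tmp/scripts_output.txt', 'a') as file:
    processes = [Popen([os.path.expanduser("~/Scripts/run.sh"), str(n)], stdout=file)
                 for n in range(10)]
    endtime = time.time() + timeout
    for p in processes:
        try:
            # wait only for the time remaining until the shared deadline
            p.wait(timeout=max(0, endtime - time.time()))
        except TimeoutExpired:
            p.terminate()
            p.wait()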