how to get stdout of subprocess in python when rec

Published 2019-05-11 11:05

Question:

I have the following simple python script:

import os, subprocess,signal,sys
import time

out = None
sub = None

def handler(signum,frame):
    print("script.py: caught sig: %i" % signum)
    sys.stdout.flush()

    if sub is not None and sub.poll() is None:  # child still running
        print("script.py: sent signal to prman pid: ", sub.pid)
        sys.stdout.flush()
        sub.send_signal(signal.SIGTERM)
        sub.wait() # deadlocks....????
        #os.kill(sub.pid, signal.SIGTERM)  # this works
        #os.waitpid(sub.pid,0)             # this works

    for i in range(0,5):
        time.sleep(0.1)
        print("script.py: cleanup %i" % i)
        sys.stdout.flush()

    sys.exit(128+signum)

signal.signal(signal.SIGINT, handler)
signal.signal(signal.SIGUSR2, handler)
signal.signal(signal.SIGTERM, handler)

sub = subprocess.Popen(["./doStuff.sh"], stderr = subprocess.STDOUT)
sub.wait()


print("finished script.py")

doStuff.sh

#!/bin/bash

function trap_with_arg() {
    func="$1" ; shift
    for sig ; do
        trap "$func $sig" "$sig"
    done
}

pid=False

function signalHandler() {

    trap - SIGINT SIGTERM

    echo "doStuff.sh caught sig: $1"
    echo "doStuff.sh cleanup: wait 10s"
    sleep 10s

    # kill ourself to signal calling process we exited on SIGINT
    kill -s SIGINT $$

}

trap_with_arg signalHandler SIGINT SIGTERM
trap "echo 'doStuff.sh ignore SIGUSR2'" SIGUSR2 
# ignore SIGUSR2

echo "doStuff.sh : pid:  $$"
echo "doStuff.sh: some stub error" 1>&2
for i in {1..100}; do
    sleep 1s
    echo "doStuff.sh, rendering $i"
done

When I launch the script in a terminal with python3 script.py & and send a signal to its process group with kill -USR2 -$!, the script catches the SIGUSR2 but then waits forever in sub.wait(). A ps -uf shows the following:

user   27515  0.0  0.0  29892  8952 pts/22   S    21:56   0:00  \_ python script.py
user   27520  0.0  0.0      0     0 pts/22   Z    21:56   0:00      \_ [doStuff.sh] <defunct>

Be aware that doStuff.sh properly handles SIGINT and quits.

I would also like to get the child's stdout output when the handler is called. How do I do this properly?

Thanks a lot!

Answer 1:

Your code can't get the child process' stdout because it doesn't redirect its standard streams while calling subprocess.Popen(). It is too late to do anything about it in the signal handler.

If you want to capture stdout then pass stdout=subprocess.PIPE and call .communicate() instead of .wait():

child = subprocess.Popen(command, stdout=subprocess.PIPE)
output = child.communicate()[0]
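A minimal runnable sketch of this (the child command here is a stand-in, not the asker's ./doStuff.sh): merge stderr into stdout and capture everything the child prints. communicate() reads to EOF and then reaps the child itself, so there is no separate .wait() call left to deadlock on.

```python
import subprocess
import sys

# Stand-in child command; stderr=subprocess.STDOUT merges the child's
# stderr into the captured stdout, as in the question's Popen call.
child = subprocess.Popen(
    [sys.executable, "-c", "print('hello from child')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
output, _ = child.communicate()  # reads until EOF, then reaps the child
print(output.decode().strip())
```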

There is a completely separate issue: the signal handler hangs on the .wait() call on Python 3 (Python 2, or using os.waitpid(), does not hang here, but the child's exit status comes back wrong instead). Here's a minimal code example that reproduces the issue:

#!/usr/bin/env python
import signal
import subprocess
import sys


def sighandler(*args):
    child.send_signal(signal.SIGINT)
    child.wait()  # It hangs on Python 3 due to child._waitpid_lock

signal.signal(signal.SIGUSR1, sighandler)
child = subprocess.Popen([sys.executable, 'child.py'])
sys.exit("From parent %d" % child.wait())  # return child's exit status

where child.py:

#!/usr/bin/env python
"""Called from parent.py"""
import sys
import time

try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:  # handle SIGINT
    sys.exit('child exits on KeyboardInterrupt')

Example:

$ python3 parent.py &
$ kill -USR1 $!
child exits on KeyboardInterrupt
$ fg
... running    python3 parent.py

The example shows that the child has exited but the parent is still running. If you press Ctrl+C to interrupt it, the traceback shows that it hangs on the with self._waitpid_lock: statement inside the .wait() call. If self._waitpid_lock = threading.Lock() is replaced with self._waitpid_lock = threading.RLock() in subprocess.py, then the effect is the same as using os.waitpid(): it doesn't hang, but the exit status is incorrect.

To avoid the issue, do not wait for the child's status in the signal handler: call send_signal(), set a simple boolean flag, and return from the handler. In the main code, check the flag after child.wait() (before print("finished script.py") in the code in your question) to see whether the signal has been received (if it is not already clear from child.returncode). If the flag is set, call the appropriate cleanup code and exit.
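The flag-based approach can be sketched like this (the child command is a stand-in for ./doStuff.sh): the handler only forwards the signal and records that it was received; all waiting and cleanup happens in the main code path.

```python
#!/usr/bin/env python3
import signal
import subprocess
import sys

signal_received = False

def sighandler(signum, frame):
    global signal_received
    signal_received = True             # record the fact...
    child.send_signal(signal.SIGTERM)  # ...forward the signal; no wait() here

signal.signal(signal.SIGUSR2, sighandler)

# stand-in for subprocess.Popen(["./doStuff.sh"], ...)
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])
returncode = child.wait()              # safe: we are outside the handler

if signal_received:                    # cleanup belongs here, not in the handler
    print("script.py: cleanup after signal")
    sys.exit(128 + signal.SIGUSR2)
print("finished script.py, child returned", returncode)
```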



Answer 2:

You should look into subprocess.check_output

proc_output = subprocess.check_output(commands_list, stderr=subprocess.STDOUT)

you can surround it in a try except and then:

except subprocess.CalledProcessError as error:
    create_log = u"Creation Failed with return code {return_code}\n{proc_output}".format(
        return_code=error.returncode, proc_output=error.output
    )
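Put together, a runnable sketch of the two snippets above (the child command is a stand-in that prints something and exits nonzero): check_output() raises CalledProcessError on a nonzero exit status, and error.output holds everything the command printed (stderr merged in via stderr=subprocess.STDOUT).

```python
import subprocess
import sys

# Stand-in command that produces output and then fails with status 3.
commands_list = [sys.executable, "-c",
                 "import sys; print('some output'); sys.exit(3)"]
try:
    proc_output = subprocess.check_output(commands_list,
                                          stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as error:
    # error.output is bytes on Python 3, hence the decode()
    create_log = "Creation Failed with return code {return_code}\n{proc_output}".format(
        return_code=error.returncode, proc_output=error.output.decode())
    print(create_log)
```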


Answer 3:

I can only wait for the process by using

  os.kill(sub.pid, signal.SIGINT)
  os.waitpid(sub.pid,0)

instead of

  sub.send_signal(signal.SIGINT)
  sub.wait() # blocks forever

This has something to do with process groups on UNIX, which I don't fully understand: I think ./doStuff.sh does not receive the signal because children in the same process group do not receive it (I am not sure whether this is correct). Hopefully somebody can elaborate on this issue a bit more.

The output produced up to the point where the handler gets called goes to the stdout of the calling bash (the console).
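The process-group point can at least be demonstrated directly. A sketch (not from the answer): with start_new_session=True the child runs in its own session and therefore its own process group, so a signal sent to the parent's whole group (as kill -USR2 -$! does) would no longer be delivered to it.

```python
import os
import subprocess
import sys

# The child prints its own process-group id; start_new_session=True makes
# the child call setsid(), giving it a fresh session and process group.
child = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.getpgid(0))"],
    stdout=subprocess.PIPE,
    start_new_session=True,
)
child_pgid = int(child.communicate()[0])
print("child pgid differs from ours:", child_pgid != os.getpgid(0))
```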