How do I close the stdout pipe when killing a subprocess?

Posted 2020-05-27 13:28

I wonder if it is possible to shut down the communication pipe when killing a subprocess started in a different thread. If I do not call communicate() then kill() will work as expected, terminating the process after one second instead of five.

I found a discussion of a similar problem here, but it gave no real answers. I assume I either have to close the pipe or explicitly kill the sub-subprocess (the "sleep" in the example) to unblock it.

I also tried to find the answer here on SO, but I only found this and this and this, none of which directly addresses this problem as far as I can tell.

What I want is to run a command in a second thread, capture all of its output, and still be able to kill it instantly whenever I choose. I could go via a file and tail that, or something similar, but I think there should be a better way to do this.

import subprocess, time
from threading import Thread

process = None

def executeCommand(command, runCommand):
    Thread(target=runCommand, args=(command,)).start()

def runCommand(command):
    global process
    args = command.strip().split()
    process = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE)

    for line in process.communicate():
        if line:
            print "process:", line,

if __name__ == '__main__':
    executeCommand("./ascript.sh", runCommand)
    time.sleep(1)
    process.kill()

This is the script:

#!/bin/bash
echo "sleeping five"
sleep 5
echo "slept five"

Output

$ time python poc.py 
process: sleeping five

real    0m5.053s
user    0m0.044s
sys 0m0.000s

3 Answers
放我归山 · 2020-05-27 14:02

It seems to me that the easiest way to do this, and to sidestep the multithreading problems, would be to set a kill flag from the main thread and check for it in the script-running thread just before calling communicate(), killing the script when the flag is True.
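
A minimal sketch of that idea, assuming a threading.Event named kill_flag as the flag (the Event and the names are my choice, not from the original code):

import subprocess, time
from threading import Thread, Event

process = None
kill_flag = Event()  # set from the main thread when the script should be killed

def runCommand(command):
    global process
    args = command.strip().split()
    process = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE)

    # Check the flag just before the blocking communicate() call and
    # kill the script if a kill has already been requested.
    if kill_flag.is_set():
        process.kill()
        return

    for line in process.communicate():
        if line:
            print(line)

if __name__ == '__main__':
    Thread(target=runCommand, args=("./ascript.sh",)).start()
    time.sleep(1)
    kill_flag.set()  # request the kill from the main thread

Note that in this sketch the flag is only consulted once, immediately before communicate() blocks.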

我命由我不由天 · 2020-05-27 14:06

I think the problem is that process.kill() only kills the immediate child process (bash), not the sub-processes of the bash script.

The problem and solution are described here:

Use Popen(..., preexec_fn=os.setsid) to create a new process group and os.killpg to kill the entire process group, e.g.:

import os
import signal
import subprocess
import time
from threading import Thread

process = None

def executeCommand(command, runCommand):
    Thread(target=runCommand, args=(command,)).start()

def runCommand(command):
    global process
    args = command.strip().split()
    process = subprocess.Popen(
        args, shell=False, stdout=subprocess.PIPE, preexec_fn=os.setsid)

    for line in process.communicate():
        if line:
            print "process:", line,

if __name__ == '__main__':
    executeCommand("./ascript.sh", runCommand)
    time.sleep(1)
    os.killpg(process.pid, signal.SIGKILL)

$ time python poc.py 
process: sleeping five

real    0m1.051s
user    0m0.032s
sys 0m0.020s
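
As a side note that is not part of the original answer: on Python 3.2+ the same effect can be had with start_new_session=True, which makes the child call setsid() just as preexec_fn=os.setsid does. A minimal Python 3 sketch:

import os, signal, subprocess, time
from threading import Thread

process = None

def runCommand(command):
    global process
    # start_new_session=True runs setsid() in the child, so bash and its
    # own children ("sleep") end up in a fresh process group.
    process = subprocess.Popen(
        command.strip().split(), shell=False, stdout=subprocess.PIPE,
        start_new_session=True)
    out, _ = process.communicate()
    if out:
        print("process: " + out.decode())

if __name__ == '__main__':
    Thread(target=runCommand, args=("./ascript.sh",)).start()
    time.sleep(1)
    os.killpg(process.pid, signal.SIGKILL)  # kill the whole group at once
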
我命由我不由天 · 2020-05-27 14:09

It looks like you may be a victim of Python's super coarse-grained concurrency. Change your script to this:

#!/bin/bash
echo "sleeping five"
sleep 5
echo "sleeping five again"
sleep 5
echo "slept five"

And then the output becomes:

process: sleeping five

real    0m5.134s
user    0m0.000s
sys     0m0.010s

If the entire script had run, the time would be about 10 s. So it looks like the Python control thread doesn't actually regain control until after the bash script's current sleep finishes. Similarly, if you change your script to this:

#!/bin/bash
echo "sleeping five"
sleep 1
sleep 1
sleep 1
sleep 1
sleep 1
echo "slept five"

Then the output becomes:

process: sleeping five

real    0m1.150s
user    0m0.010s
sys     0m0.020s

In short, your code works as logically implemented. :)
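
One way to see where the time goes is to stamp the moment kill() is issued and the moment communicate() returns; a small sketch along those lines (the timing prints are my addition, not part of the answer):

import subprocess, time
from threading import Thread

process = None
start = time.time()

def runCommand(command):
    global process
    process = subprocess.Popen(
        command.strip().split(), shell=False, stdout=subprocess.PIPE)
    # communicate() blocks until the pipe is closed on the script's side
    process.communicate()
    print("communicate() returned after %.2fs" % (time.time() - start))

if __name__ == '__main__':
    Thread(target=runCommand, args=("./ascript.sh",)).start()
    time.sleep(1)
    process.kill()
    print("kill() issued after %.2fs" % (time.time() - start))

With the original one-sleep script this should show kill() near 1 s and communicate() returning near 5 s; with the five sleep 1 version both land near 1 s.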
