Get the output of multiple commands from subprocess

Published 2019-06-10 02:36

Question:

I am trying to run a command, get its output, then later run another command in the same environment (for example, if I set an environment variable in the first command, I want it to be available to the second). I tried this:

import subprocess

process = subprocess.Popen("/bin/bash", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.flush()

stdout, stderr  = process.communicate()
print "stdout: " + str(stdout)

# Do it again
process.stdin.write("echo $MyVar\n")
process.stdin.flush()

stdout, stderr = process.communicate()
print "stdout: " + str(stdout)

but communicate() reads until end-of-file and closes the pipes, so this is not a valid technique. I get this:

stdout: Test

Traceback (most recent call last):
  File "./MultipleCommands.py", line 15, in <module>
    process.stdin.write("echo $MyVar\n")
ValueError: I/O operation on closed file

I have seen this: https://stackoverflow.com/a/15654218/284529 , but it doesn't give a working example of how to do what it proposes. Can anyone demonstrate how to do this? I have also seen other techniques that involve constantly checking for output in a loop, but this doesn't fit the "get the output of a command" mentality - it is just treating it like a stream.

Answer 1:

To get the output of multiple commands, just combine them into a single script:

#!/usr/bin/env python
import subprocess
import sys

output = subprocess.check_output("""
export MyVar="Test"
echo $MyVar
echo ${MyVar/est/ick}
""", shell=True, executable='/bin/bash', universal_newlines=True)
sys.stdout.write(output)

Output

Test
Tick
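On Python 3, the same single-script approach can be written with subprocess.run (a sketch, assuming Python 3.7+ where `capture_output=True` and `text=True` are available):

```python
import subprocess

# One bash invocation runs all the commands, so they share one environment.
result = subprocess.run(
    'export MyVar="Test"\n'
    'echo $MyVar\n'
    'echo ${MyVar/est/ick}\n',
    shell=True, executable="/bin/bash",
    capture_output=True, text=True, check=True,
)
print(result.stdout, end="")
```
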


Answer 2:

The communicate and wait methods of Popen objects close the pipes after the process returns. If you want to stay in communication with the process, try something like this:

import subprocess

process = subprocess.Popen("/bin/bash", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.flush()

process.stdout.readline()

process.stdin.write("echo $MyVar\n")
process.stdin.flush()

stdout, stderr = process.communicate()
print "stdout: " + str(stdout)

I think you misunderstand communicate...

Take a look at this link: http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate

communicate sends a string to the other process and then waits for it to finish... (like you said, it waits for EOF while listening on stdout & stderr)

What you should do instead is:

proc.stdin.write('message')

# ...figure out how long or why you need to wait...

proc.stdin.write('message2')

(and if you need to get the stdout or stderr you'd use proc.stdout or proc.stderr)
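Here is that write/flush/readline dance as a self-contained Python 3 sketch. Note two things the Python 2 snippet above glosses over: on Python 3 the pipes carry bytes unless you pass `text=True`, and line buffering needs `bufsize=1`.

```python
import subprocess

proc = subprocess.Popen(
    ["/bin/bash"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    text=True, bufsize=1,  # text mode + line buffering
)
proc.stdin.write('export MyVar="Test"\n')
proc.stdin.write('echo $MyVar\n')
proc.stdin.flush()
first = proc.stdout.readline()     # the shell is still alive here

proc.stdin.write('echo $MyVar\n')  # same environment, second command
proc.stdin.flush()
second = proc.stdout.readline()

out, _ = proc.communicate()        # now close stdin and reap the shell
print(first.strip(), second.strip())
```
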



Answer 3:

communicate returns when it sees that the subprocess has ended, but if you have an intermediate process (bash), you have to signal completion manually each time a sub-subprocess finishes.

As for how to signal, the simplest approach is to emit a marker line. I'm sorry to disappoint you here, but polling (i.e. repeatedly checking in a loop) is really the only sane option. If you don't like the loop, you can hide it away in a function.

import subprocess

def readlines_upto(stream, until="### DONE ###"):
    # On a blocking pipe, readline waits until a full line is available
    # and returns "" (never None) at EOF, so no sleep loop is needed.
    while True:
        line = stream.readline()
        if not line:
            break  # EOF: bash exited without printing the marker
        if line.rstrip() == until:
            break
        yield line

process = subprocess.Popen("/bin/bash", shell=True,
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.stdin.write("export MyVar=\"Test\"\n")
process.stdin.write("echo $MyVar\n")
process.stdin.write("echo '### DONE ###'\n")
process.stdin.flush()

# Note, I don't read stderr here, so if the subprocess writes too much there,
# it will fill the pipe and block. If you don't need stderr data, don't
# redirect it to a pipe at all. If you do need it, make readlines_upto
# read from both pipes.
stdout = "".join(line for line in readlines_upto(process.stdout))
print "stdout: " + stdout

# Do it again
process.stdin.write("echo $MyVar\n")
process.stdin.flush()
stdout, stderr = process.communicate()
print "stdout: " + str(stdout)
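The marker idea can be wrapped into a small helper so each call reads like "run a command, get its output". A Python 3 sketch (`run_cmd` and the `### DONE ###` marker are illustrative names; a blocking readline makes a sleep loop unnecessary here):

```python
import subprocess

MARKER = "### DONE ###"

def run_cmd(proc, command):
    """Send one command to the shell, return its output up to the marker."""
    proc.stdin.write(command + "\n")
    proc.stdin.write("echo '%s'\n" % MARKER)
    proc.stdin.flush()
    lines = []
    for line in iter(proc.stdout.readline, ""):  # "" means EOF
        if line.rstrip("\n") == MARKER:
            break
        lines.append(line)
    return "".join(lines)

proc = subprocess.Popen(["/bin/bash"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True, bufsize=1)
first = run_cmd(proc, 'export MyVar="Test"; echo $MyVar')
second = run_cmd(proc, 'echo $MyVar')  # the variable survives between calls
proc.communicate()
print(first, second, sep="", end="")
```
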


Answer 4:

As per the manual:

Popen.communicate(input=None)

    Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. [...]

You need to read from the pipe instead:

import os
stdout = os.read(process.stdout.fileno(), 1024)
print "stdout: " + stdout

If there's no data waiting, it will block forever, or until data is ready to be read. You should use the select system call to prevent that:

import select
import os

try:
    i,o,e = select.select([process.stdout], [], [], 5) # 5 second timeout
    stdout = os.read(i[0].fileno(), 1024)
except IndexError:
    # nothing was written to the pipe in 5 seconds
    stdout = ""

print "stdout: " + stdout

If you want to fetch multiple writes, to avoid race conditions, you'll have to put it in a loop:

stdout = ""
while True:
    try:
        i,o,e = select.select([process.stdout], [], [], 5) # 5 second timeout
        stdout += os.read(i[0].fileno(), 1024)
    except IndexError:
        # nothing was written to the pipe in 5 seconds, we're done here
        break
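Putting those pieces together, a runnable Python 3 sketch of the select loop (os.read returns raw bytes, so the output is decoded at the end; the timeout is shortened to 1 second to keep the example quick):

```python
import os
import select
import subprocess

proc = subprocess.Popen(["/bin/bash"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
proc.stdin.write(b'export MyVar="Test"\necho $MyVar\necho $MyVar\n')
proc.stdin.flush()
proc.stdin.close()  # no more input; bash exits after the commands run

chunks = []
while True:
    ready, _, _ = select.select([proc.stdout], [], [], 1.0)  # 1 s timeout
    if not ready:
        break  # nothing was written to the pipe within the timeout
    data = os.read(proc.stdout.fileno(), 1024)
    if not data:
        break  # EOF: the shell has exited
    chunks.append(data)
proc.wait()
stdout = b"".join(chunks).decode()
print("stdout: " + stdout)
```
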