pipe stdout and stderr to two different processes

Posted 2019-01-08 08:09

Question:

I have a pipeline that does just

 command1 | command2

So, stdout of command1 goes to command2, while stderr of command1 goes to the terminal (or wherever stdout of the shell points).

How can I pipe stderr of command1 to a third process (command3) while stdout still goes to command2?

Answer 1:

Use another file descriptor

{ command1 2>&3 | command2; } 3>&1 1>&2 | command3

You can use up to 7 other file descriptors: from 3 to 9.
If you want more explanation, please ask, I can explain ;-)
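
In the meantime, here is the same command with each redirection spelled out (a sketch; command1, command2 and command3 are the placeholders from above):

{ command1 2>&3 | command2; } 3>&1 1>&2 | command3
# 3>&1 : fd 3 of the group duplicates the group's stdout,
#        i.e. the pipe that feeds command3
# 1>&2 : fd 1 of the group is then pointed at the original stderr,
#        so command2's output ends up on stderr
# 2>&3 : inside the group, command1's stderr goes to fd 3,
#        and therefore into the pipe that feeds command3
# command1's stdout is untouched: it flows through the first pipe to command2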

Test

{ { echo a; echo >&2 b; } 2>&3 | sed >&2 's/$/1/'; } 3>&1 1>&2 | sed 's/$/2/'

output:

b2
a1

(Note: b2 arrives on the pipeline's stdout, while a1 is written to the original stderr by sed >&2; since the two sed processes run concurrently, the relative order of the two lines is not guaranteed.)

Example

Produce two log files:
1. stderr only
2. stderr and stdout

{ { { command 2>&1 1>&3; } | tee err-only.log; } 3>&1; } > err-and-stdout.log
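
Step by step (a sketch of the same command):

{ { { command 2>&1 1>&3; } | tee err-only.log; } 3>&1; } > err-and-stdout.log
# > err-and-stdout.log : stdout of the whole thing goes to that file
# 3>&1                 : fd 3 of the middle group also points at err-and-stdout.log
# 2>&1                 : command's stderr goes into the pipe that feeds tee
# 1>&3                 : command's stdout goes to fd 3, i.e. straight to err-and-stdout.log
# tee                  : writes its input (stderr only) to err-only.log and to its
#                        stdout, which also lands in err-and-stdout.log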

If command is echo out; echo err >&2, we can test it like this (in the line below, command is inlined with the 2>&1 1>&3 redirections already applied, so its echos write directly to the right descriptors):

$ { { { echo out >&3; echo err >&1; } | tee err-only.log; } 3>&1; } > err-and-stdout.log
$ head err-only.log err-and-stdout.log
==> err-only.log <==
err

==> err-and-stdout.log <==
out
err
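
The same pattern with command left in place rather than inlined (a sketch; sh -c '...' stands in for any command that writes to both streams) produces the same two log files:

{ { { sh -c 'echo out; echo err >&2' 2>&1 1>&3; } | tee err-only.log; } 3>&1; } > err-and-stdout.log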


Answer 2:

The accepted answer ends up with stdout and stderr swapped: what leaves the pipeline on stdout originated as stderr, and vice versa. Here's a method that preserves them (since Googling for that purpose brings up this post):

{ command 2>&1 1>&3 3>&- | stderr_command; } 3>&1 1>&2 | stdout_command

Notice:

  • 3>&- is required to prevent fd 3 from being inherited by command (inheriting it can lead to unexpected results, depending on what command does internally; a quick demonstration follows the breakdown below).

Parts explained:

  1. Outer part first:

    1. 3>&1 -- fd 3 for { ... } is set to what fd 1 was (i.e. stdout)
    2. 1>&2 -- fd 1 for { ... } is set to what fd 2 was (i.e. stderr)
    3. | stdout_command -- fd 1 (was stdout) is piped through stdout_command
  2. Inner part inherits file descriptors from the outer part:

    1. 2>&1 -- fd 2 for command is set to what fd 1 was (i.e. stderr as per outer part)
    2. 1>&3 -- fd 1 for command is set to what fd 3 was (i.e. stdout as per outer part)
    3. 3>&- -- fd 3 for command is set to nothing (i.e. closed)
    4. | stderr_command -- fd 1 (was stderr) is piped through stderr_command
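
To see the effect of 3>&- (a sketch; the exact error text is bash's): with it in place, fd 3 simply does not exist for command, so any attempt to write to it fails; without it, the same write would leak into the pipe feeding stdout_command.

{ { echo probe >&3; } 2>&1 1>&3 3>&- | cat; } 3>&1 1>&2 | cat

which prints (on the original stderr):

bash: 3: Bad file descriptor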

Example:

foo() {
    echo a
    echo b >&2
    echo c
    echo d >&2
}

{ foo 2>&1 1>&3 3>&- | sed -u 's/^/err: /'; } 3>&1 1>&2 | sed -u 's/^/out: /'

Output:

out: a
err: b
err: d
out: c

(a always comes before c, and b before d, but the interleaving of the out: and err: lines is indeterminate, because there's no form of synchronization between stderr_command and stdout_command.)



Answer 3:

Simply redirect stderr to stdout

{ command1 | command2; } 2>&1 | command3

Caution: command3 will also read command2's stdout (if any).
To avoid that, you can discard command2's stdout:

{ command1 | command2 >/dev/null; } 2>&1 | command3

However, to keep command2's stdout (e.g. in the terminal),
please refer to my other, more complex answer above.

Test

{ { echo -e "a\nb\nc" >&2; echo "----"; } | sed 's/$/1/'; } 2>&1 | sed 's/$/2/'

output:

a2
b2
c2
----12
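
And the discarding variant with the same test commands (a sketch): the ---- line from command2 no longer reaches command3:

{ { echo -e "a\nb\nc" >&2; echo "----"; } | sed 's/$/1/' >/dev/null; } 2>&1 | sed 's/$/2/'

output:

a2
b2
c2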


Answer 4:

Using process substitution:

command1 > >(command2) 2> >(command3)

See http://tldp.org/LDP/abs/html/process-sub.html for more info.
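
A quick test (a sketch; process substitution requires bash, ksh or zsh, and the shell may print its prompt before the substituted processes finish, so the order of the lines can vary):

{ echo out; echo err >&2; } > >(sed 's/^/stdout: /') 2> >(sed 's/^/stderr: /')

output:

stdout: out
stderr: err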



Answer 5:

The same effect can be accomplished fairly easily with a FIFO (named pipe). I'm not aware of a direct piping syntax for this (though it would be nifty to see one). Here is how you might do it:

First, something that prints to both stdout and stderr, outerr.sh:

#!/bin/bash

echo "This goes to stdout"
echo "This goes to stderr" >&2

Then we can do something like this:

$ mkfifo err
$ wc -c err &
[1] 2546
$ ./outerr.sh 2>err | wc -c
20
20 err
[1]+  Done                    wc -c err

That way you set up the listener for the stderr output first; it blocks until the fifo has a writer, which happens in the next command via the syntax 2>err. You can see that each wc -c got 20 characters of input.

Don't forget to clean up the fifo after you're done if you don't want it to hang around (e.g. rm err). If the other command wants input on stdin rather than a file argument, you can use input redirection like wc -c < err too.
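
Putting those two remarks together, a variant that feeds the reader through stdin and removes the fifo afterwards might look like this (a sketch, reusing outerr.sh from above):

mkfifo err
sed 's/^/err: /' < err &     # reader takes stderr from the fifo via stdin
./outerr.sh 2> err | sed 's/^/out: /'
wait                         # let the background reader finish
rm err                       # remove the fifo when done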