So I asked a question before about adding a prefix to ping's output (see my last question), which left me with the following line:
ping 8.8.8.8 | while read line; do echo "$(date): $line"; done | grep time=
And this works great. There is only one problem: I cannot get it to save its output to a file. I tried it with just a simple redirect, like so:
ping 8.8.8.8 | while read line; do echo "$(date): $line"; done | grep time= >> googleping
But nothing gets saved in the file...
Then I tried this:
ping 8.8.8.8 | while read line; do echo "$(date): $line"; done | grep time= | tee -a googleping
with tee to print it on screen and also save it to the file... no luck again.
(But I tried
echo hello | tee -a googleping
and it worked fine...)
And then I tried another while loop, like so:
ping 8.8.8.8 | while read line; do echo "$(date): $line"; done | grep time= | while read line; do echo $line; echo $line >> googleping; done
No luck again...
So is there a limit on how many pipes and redirects one line can have? And if so, is there a way I can still achieve my goal of logging when I can't reach Google? (I just tested it with grep time= to have guaranteed output; in the end I will use grep -v time= to get all lines that have no time in them, no matter what the error may be.)
I should add that in the end I want to do this in the macOS Terminal, but I tried it on an Ubuntu server and on a Mac, and neither works with any of the methods described above.
I hope someone can help me!
The answer by hek2mgl explains that your particular issue is unrelated to the number of pipes.
But to answer the question in your title ("Is there a limit on how many pipes I can use?"): yes, there is a limit on open file descriptors, but on current systems it is quite large in practice (several thousand). AFAIU, POSIX only guarantees a small limit (perhaps only 20). Your system probably has one file descriptor limit per process, and another file descriptor limit system-wide....
To set or query that per-process file descriptor limit, you might use setrlimit(2) and getrlimit with RLIMIT_NOFILE (and the ulimit builtin of bash, or the limit builtin of zsh, in your interactive shell). You can also read /proc/self/limits on Linux (see proc(5) for more on /proc/ pseudo-files). On my Linux Debian system I have 65536 maximum file descriptors per process.
IIRC, /proc/sys/fs/file-max gives the system-wide maximum number of open file descriptors. On my system, it is 1632058 right now.
When you have reached the file descriptor limit, the pipe(2) syscall (done by your shell for pipelines with |, for example) would fail with an error such as EMFILE. And open(2) can also fail for exceeding a disk quota...
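From a shell, the limits mentioned above can be inspected like this (a quick sketch; the /proc paths are Linux-specific and won't exist on macOS):

```shell
getconf OPEN_MAX   # the open-file limit POSIX reports for this system
ulimit -Sn         # per-process soft limit (shell builtin)
ulimit -Hn         # per-process hard limit (the soft limit's ceiling)

# Linux-only pseudo-files:
[ -r /proc/self/limits ] && grep 'open files' /proc/self/limits
[ -r /proc/sys/fs/file-max ] && cat /proc/sys/fs/file-max
```

The soft limit can be raised by an unprivileged process, but only up to the hard limit.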
See also pipe(7) and read Advanced Linux Programming; when programming in C, you should use fflush(3) appropriately and wisely.
This is not related to the number of pipes you are using. You don't see output from tee immediately because of output buffering. When grep's output is going to a pipe instead of a terminal, it gets block-buffered. If you have enough patience, the output will appear after a while (once the buffer gets flushed).
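To see the effect, compare a block-buffered and a line-buffered grep on a slow producer (a made-up demo, not part of the original answer):

```shell
# Emits one matching line per second.
producer() {
    for i in 1 2 3; do
        echo "64 bytes from 8.8.8.8: time=$i ms"
        sleep 1
    done
}

# Block-buffered: grep's stdout goes to a pipe, so the matches sit in
# a buffer (typically ~4 KiB) and only appear once the producer exits.
producer | grep time= | cat

# Line-buffered: each match is flushed as soon as its line is complete.
producer | stdbuf -oL grep time= | cat
```

In the first pipeline all three lines appear at once at the end; in the second they trickle out one per second.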
This behaviour is implemented in the libc unless a program explicitly handles buffering on its own. You can influence this behaviour using the stdbuf command: call grep via stdbuf -o0, which shrinks grep's output buffer to zero length. Alternatively you can use -oL, which produces line-buffered output.
Sidenote: stdbuf works for your example. But if you read the man page of stdbuf carefully, you'll notice that it has no effect on commands that adjust the buffering of their standard streams themselves. That's what I said above: stdbuf works only with programs which don't handle buffering on their own. tee is a program that does handle its own buffering, meaning that if you pipe further from tee, you cannot use stdbuf on it.
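Putting this together, the pipeline from the question only needs stdbuf in front of grep. A sketch (the printf producer stands in for ping so that the example terminates):

```shell
# Stand-in producer; in real use replace the printf with: ping 8.8.8.8
printf 'reply time=23.4 ms\nno reply\nreply time=24.1 ms\n' |
while read line; do
    echo "$(date): $line"
done |
# -o0 disables grep's output buffering, so tee (and therefore the
# file) receives each matching line immediately.
stdbuf -o0 grep time= |
tee -a googleping
```

The two time= lines are printed to the terminal as they arrive and appended to googleping; the "no reply" line is filtered out by grep.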