'grep -q' not exiting with 'tail -f'

Asked 2020-05-24 04:49

I am trying to implement a script that waits for a specific message in a log file. Once the message is logged, I want the script to continue.

Here's what I am trying out with tail -f and grep -q:

# tail -f logfile | grep -q 'Message to continue'

The grep never quits, so the command waits forever even after 'Message to continue' is logged to the file.

When I run this without -f, it seems to work fine.

Tags: linux bash grep
6 Answers
Fickle 薄情
#2 · 2020-05-24 05:03

After some experimentation, I believe the problem lies in the way bash waits for all the processes in a pipeline to quit.

With a plain file 'qqq' of some 360 lines of C source (a couple of programs concatenated several times over), and using 'grep -q return', I observe:

  1. tail -n 300 qqq | grep -q return does exit almost at once.
  2. tail -n 300 -f qqq | grep -q return does not exit.
  3. tail -n 300 -f qqq | strace -o grep.strace grep -q return does not exit until interrupted. The grep.strace file ends with:

    read(0, "#else\n#define _XOPEN_SOURCE 500\n"..., 32768) = 10152
    close(1)                                = 0
    exit_group(0)                           = ?
    

    This leads me to think that grep had already exited before the interrupt killed tail; if grep were waiting for something, there would be an indication that it had received a signal.

  4. A simple program that simulates what the shell does, but without the waiting, indicates that things terminate.

    #define _XOPEN_SOURCE 600
    #include <stdlib.h>
    #include <unistd.h>
    #include <stdarg.h>
    #include <errno.h>
    #include <string.h>
    #include <stdio.h>
    
    static void err_error(const char *fmt, ...)
    {
        int errnum = errno;
        va_list args;
        va_start(args, fmt);
        vfprintf(stderr, fmt, args);
        va_end(args);
        if (errnum != 0)
            fprintf(stderr, "%d: %s\n", errnum, strerror(errnum));
        exit(1);
    }
    
    int main(void)
    {
        int p[2];
        if (pipe(p) != 0)
            err_error("Failed to create pipe\n");
        pid_t pid;
        if ((pid = fork()) < 0)
            err_error("Failed to fork\n");
        else if (pid == 0)
        {
            char *tail[] = { "tail", "-f", "-n", "300", "qqq", 0 };
            dup2(p[1], 1);
            close(p[0]);
            close(p[1]);
            execvp(tail[0], tail);
            err_error("Failed to exec tail command\n");
        }
        else
        {
            char *grep[] = { "grep", "-q", "return", 0 };
            dup2(p[0], 0);
            close(p[0]);
            close(p[1]);
            execvp(grep[0], grep);
            err_error("Failed to exec grep command\n");
        }
        err_error("This can't happen!\n");
        return -1;
    }
    

    With a fixed-size file, tail -f isn't going to exit, so the shell (bash) seems to hang around waiting for it.

  5. tail -n 300 -f qqq | grep -q return hung around, but when I used another terminal to add another 300 lines to the file qqq, the command exited. I interpret this as follows: grep had already exited, so when tail wrote the new data to the pipe it received SIGPIPE and died, and bash then recognized that all the processes in the pipeline were dead.

I observed the same behaviour with both ksh and bash, which suggests it is expected behaviour rather than a bug. Tested on Linux (RHEL 5) on an x86_64 machine.
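Observation 5 can be reproduced directly in the shell. This is a sketch, assuming a scratch file qqq in the current directory that contains the word 'return':

```shell
# grep -q exits as soon as it sees 'return', but the background job only
# completes once tail next writes into the broken pipe and gets SIGPIPE.
printf 'return\n' > qqq
tail -n 300 -f qqq | grep -q return &
sleep 1                      # give grep time to match and exit
printf 'return\n' >> qqq     # tail forwards this line, hits SIGPIPE, dies
wait                         # now returns, since both processes are gone
rm -f qqq
```

Without the second printf, the wait would block indefinitely, exactly as in observation 2.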

[account suspended]
#3 · 2020-05-24 05:03
tail -f logfile | grep --max-count=1 -q 'Message to continue'

Admittedly, the command only returns when the next line is written to the log after the match, not immediately on the matched one.
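A pattern that returns the moment the message appears, rather than on the next write, is bash process substitution: bash does not wait for the substituted tail, so the script continues as soon as grep -q matches. A sketch, assuming bash and a demo log at ./logfile (bash sets $! to the substituted process's PID, which lets you reap the leftover tail):

```shell
# Demo setup: a log that already contains the message.
printf 'Message to continue\n' > logfile
# grep -q exits on the first match; the shell does not wait for the
# process-substituted tail. -n +1 also scans lines already in the file.
grep -q 'Message to continue' <(tail -n +1 -f logfile)
kill "$!" 2>/dev/null        # reap the leftover tail ($! = its PID)
echo "message seen, continuing"
rm -f logfile
```

Unlike the pipeline form, nothing here depends on tail writing again after the match.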

我想做一个坏孩纸
#4 · 2020-05-24 05:14

I thought I'd post this as an answer, since it explains why the command exits only after a second write to the file:

touch xxx
tail -f xxx | grep -q 'Stop'
ps -ef | grep 'grep -q'
# the grep process is there
echo "Stop" >> xxx
ps -ef | grep 'grep -q'
# the grep process actually DID exit
printf "\n" >> xxx
# the tail process exits, probably because it receives a signal when it 
# tries to write to a closed pipe
干净又极端
#5 · 2020-05-24 05:17

That's because tail with the -f (follow) option doesn't quit, and keeps providing output to grep. Waiting for lines in a log file would probably be easier in perl/python.

Launch tail -f with the Python subprocess module, read its output in a loop until you see the lines you want, then exit the Python script. Call that Python script from inside your shell script.

The Python script will block the shell script until the desired lines are seen.

chillily
#6 · 2020-05-24 05:17

I was searching for the answer to this for my own project: trying to detect exactly when a passed-through GPU becomes active on a VMware ESXi VM. Multiple variations of the same question are everywhere; this one is pretty recent. I figured out a way to fool it, and if you can live with the interesting line being repeated in the log:

tail -n 1 -f /var/log/vmkernel.log | grep -m 1 IOMMUIntel >> /var/log/vmkernel.log

This tails the log one line at a time; grep prints the first occurrence, which is appended back into the log, and then exits. The appended line makes tail write to the now-closed pipe, so tail quits immediately instead of waiting for the next real log entry.

If you like VMware passthrough hacking, read more here: http://hackaday.io/project/1071-the-hydra-multiheaded-virtual-computer

Luminary・发光体
#7 · 2020-05-24 05:20

tail -f will read a file and display lines as they are added; it will not terminate (unless a signal such as SIGTERM is sent). grep is not the blocking part here; tail -f is. grep will read from the pipe until it is closed, but the pipe never closes because tail -f does not quit and keeps its end open.


A solution to your problem would probably be (not tested and very likely to perform badly):

tail -f logfile | while read -r line; do
  echo "$line" | grep -q 'find me to quit' && break
done
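A lighter variant of the same idea (also a sketch): matching inside the shell with case avoids forking a grep for every line. Note that, as described in the answers above, the pipeline itself still only returns once tail's next write triggers SIGPIPE after the break.

```shell
tail -f logfile | while IFS= read -r line; do
  case $line in
    *'find me to quit'*) break ;;   # match in the shell, no grep fork
  esac
done
```

IFS= together with read -r keeps each log line intact (no trimming, no backslash processing).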
查看更多