Starting a remote script containing nohup via ssh

Published 2019-04-27 00:56

Question:

I want to start a script remotely via ssh like this:

ssh user@remote.org -t 'cd my/dir && ./myscript data my@email.com'

The script does various things which work fine until it comes to a line with nohup:

nohup time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 &

It is supposed to start the program myprog, redirect its output to my.log, and send an email with some data files created by myprog as attachments and the log as the body. However, when the script reaches this line, ssh outputs:

Connection to remote.org closed.

What is the problem here?

Thanks for any help

Answer 1:

Your command runs a pipeline of processes in the background, so the calling script will exit straight away (or very soon afterwards). This will cause ssh to close the connection. That in turn will cause a SIGHUP to be sent to any process attached to the terminal that the -t option caused to be created.
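
You can see this mechanism in isolation with a minimal sketch (the host is a placeholder): a job backgrounded without nohup dies together with the terminal that -t allocated.

ssh user@remote.org -t 'sleep 60 & echo started'
# the remote shell exits right after printing "started"; ssh then tears
# down the pseudo-terminal, and the backgrounded sleep receives SIGHUP
# because it is still attached to that terminal, so it dies immediately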

Your time ./myprog process is protected by a nohup, so it should carry on running. But your mutt isn't, and that is likely to be the issue here. I suggest you change your command line to:

nohup sh -c "time ./myprog $1 >my.log && mutt -a ${1%.*}/`basename $1` -a ${1%.*}/`basename ${1%.*}`.plt $2 < my.log 2>&1 " &

so the entire pipeline gets protected. (If that doesn't fix it, it may be necessary to do something with the file descriptors; for instance, mutt may have other issues with the terminal no longer being around, or the quoting may need tweaking depending on the parameters. But give that a try for now...)
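
One quoting subtlety in that line: because the command string is in double quotes, $1, $2, and the backticks are expanded by the outer script before sh -c ever runs. That happens to work here, but it breaks on arguments containing spaces. A more defensive sketch of the same idea (my own variant, not part of the answer; I have also moved 2>&1 so myprog's stderr lands in the log, which seems to be the intent) passes the arguments through as positional parameters:

# $1 and $2 are handed to the inner shell as its own $1/$2, so all
# expansions happen inside sh -c and survive spaces in the arguments
nohup sh -c 'time ./myprog "$1" >my.log 2>&1 &&
  mutt -a "${1%.*}/$(basename "$1")" -a "${1%.*}/$(basename "${1%.*}").plt" "$2" <my.log' sh "$1" "$2" &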



Answer 2:

This answer may be helpful. In summary, to achieve the desired effect, you have to do the following things:

  1. Redirect all I/O on the remote nohup'ed command
  2. Tell your local SSH command to exit as soon as it's done starting the remote process(es).

Quoting the answer I already mentioned, which in turn quotes Wikipedia:

Nohuping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:

nohup myprogram > foo.out 2> foo.err < /dev/null &
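
Applied to the question's setup, those redirections go onto the remote command itself; a sketch (the log filename is my choice, and note there is no -t, since a pseudo-terminal is exactly what we don't want here):

ssh user@remote.org 'cd my/dir && nohup ./myscript data my@email.com >nohup.log 2>&1 </dev/null &'
# with stdin, stdout, and stderr all detached from the session, ssh has
# nothing left to wait for, so the connection closes while myscript
# keeps running under nohup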

UPDATE

I've just had success with this pattern:

ssh -f user@host 'sh -c "( (nohup command-to-nohup >output.file 2>&1 </dev/null) & )"'
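
For reference, here is my reading of what each layer in that pattern contributes (annotations are mine, not from the original answer):

# -f            ssh puts itself in the background just before running
#               the remote command, so the local side returns promptly
# sh -c "..."   gives us an explicit shell on the remote side
# ( ( ... ) & ) backgrounds the nohup'ed job inside a subshell; the
#               subshell exits at once, so the job is orphaned and
#               reparented away from the ssh session
# nohup + redirections
#               immunize the job against SIGHUP and detach it from the
#               session's streams so nothing holds the connection open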


Answer 3:

Managed to solve this for a use case where I needed to start backgrounded scripts remotely via ssh, using a technique similar to the other answers here but in a way I feel is simpler and cleaner (at least, it makes my code shorter and, I believe, better-looking): explicitly close all three streams using the stream-close redirection syntax, as discussed at the following locations:

  1. https://unix.stackexchange.com/questions/131801/closing-a-file-descriptor-vs

  2. https://unix.stackexchange.com/questions/70963/difference-between-2-2-dev-null-dev-null-and-dev-null-21

  3. http://www.tldp.org/LDP/abs/html/io-redirection.html#CFD

  4. https://www.gnu.org/software/bash/manual/html_node/Redirections.html

Rather than the more widely used but (IMHO) hackier "redirect to/from /dev/null" approach, this results in the deceptively simple:

    nohup script.sh >&- 2>&- <&-&

2>&1 works just as well as 2>&-, but I feel the latter is ever-so-slightly clearer. ;) Most people would put a space before the final "background job" ampersand, but since it is not required (the ampersand itself functions like a semicolon in normal usage), I prefer to omit it. :)
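
Putting that together with the ssh invocation from the question, the remote side would look something like the sketch below. One caveat worth testing: some programs misbehave when writing to a closed descriptor (they get an error instead of a harmless sink), which is the usual argument for /dev/null over the close syntax.

ssh user@remote.org 'cd my/dir && nohup ./myscript data my@email.com >&- 2>&- <&- &'
# all three descriptors are closed rather than pointed at /dev/null, so
# nothing keeps the session's streams open and the connection closes as
# soon as the remote shell exits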