I am wrapping a fastcgi app in a bash script like this:
#!/bin/bash
# stuff
./fastcgi_bin
# stuff
As bash only executes traps for signals once the current foreground command has finished, I can't just kill -TERM scriptpid,
because the fastcgi app will be kept alive.
I've tried sending the binary to the background:
#!/bin/bash
# stuff
./fastcgi_bin &
PID=$!
trap "kill $PID" TERM
# stuff
But if I do it like this, stdin and stdout apparently aren't redirected properly, because it doesn't connect with lighttpd's mod_fastcgi; the foreground version does work.
EDIT: I've been looking into the problem, and it happens because bash redirects /dev/null to stdin when a program is launched in the background, so any way of avoiding this should solve my problem as well.
Any hint on how to solve this?
There are some options that come to my mind:

1. When a process is launched from a shell script, both belong to the same process group. Killing only the parent leaves the children alive, so the whole process group should be killed. This can be achieved by passing the negated PGID (process group ID) to kill, which here is the same as the parent's PID (see the sketch after this list), e.g.:
kill -TERM -$PARENT_PID

2. Do not execute the binary as a child, but replace the script process with it using exec. You lose the ability to execute anything afterwards, though, because exec completely replaces the parent process.

3. Do not kill the shell script process, but the FastCGI binary itself. Then, in the script, examine its return code and act accordingly, e.g.:
./fastcgi_bin || exit -1

Depending on how mod_fastcgi handles worker processes, only the second option might be viable.
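A minimal sketch of the first option, assuming the wrapper leads its own process group and that its PID is available in a variable I'm calling SCRIPT_PID (procps ps syntax assumed):

# SCRIPT_PID is assumed to hold the wrapper script's PID.
# Look up its process group and SIGTERM every member of it,
# which takes the FastCGI child down along with the script.
PGID=$(ps -o pgid= -p "$SCRIPT_PID" | tr -d ' ')
kill -TERM -"$PGID"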
I wrote this script just minutes ago to kill a bash script and all of its children...
Make sure the script is executable, or you will get an error on the $0 $i call.
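The script itself is not reproduced above; a minimal sketch along the lines described (a script that re-invokes itself via $0 for every child PID; the name, arguments, and structure are my own assumptions) might look like this:

#!/bin/bash
# killtree.sh (hypothetical): kill a process and all of its descendants.
# Usage: ./killtree.sh <pid>
PID=$1

# Recurse into every child first; this is the "$0 $i" call that fails
# if the script is not executable.
for i in $(ps -o pid= --ppid "$PID"); do
    "$0" "$i"
done

kill -TERM "$PID"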
Try keeping the original stdin using ./fastcgi_bin 0<&0 &.
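Putting that redirection together with the trap from the question, a sketch of the wrapper could look like this (the kill/wait handling is my assumption, not part of the original answer):

#!/bin/bash
# stuff

# Keep the script's original stdin instead of the implicit </dev/null
# that bash attaches to background jobs.
./fastcgi_bin 0<&0 &
PID=$!

# Forward SIGTERM to the binary instead of leaving it running.
trap 'kill -TERM "$PID"' TERM
wait "$PID"   # returns early when the trap fires
wait "$PID"   # reap the binary after the signal has been forwarded

# stuff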
You can override the implicit </dev/null for a background process by redirecting stdin yourself, for example with the 0<&0 redirection shown above.

You can do that with a coprocess.
Edit: well, coprocesses are background processes that can have stdin and stdout open (because bash prepares FIFOs for them). But you still need to read/write to those FIFOs, and the only useful primitive for that is bash's read (possibly with a timeout or a file descriptor); nothing robust enough for a CGI. So on second thought, my advice would be not to do this thing in bash. Doing the extra work in the fastcgi, or in an HTTP wrapper like WSGI, would be more convenient.
I have no idea if this is an option for you or not, but since you have a bounty I am assuming you might go for ideas that are outside the box.

Could you rewrite the bash script in Perl? Perl has several methods of managing child processes. You can read perldoc perlipc, and there are more specifics in the core modules IPC::Open2 and IPC::Open3.

I don't know how this will interface with lighttpd etc., or if there is more functionality in this approach, but at least it gives you some more flexibility and some more to read in your hunt.