How to wait in a bash script for several subprocesses spawned from that script to finish, and return exit code !=0 when any of the subprocesses ends with code !=0?
Simple script:
#!/bin/bash
for i in `seq 0 9`; do
doCalculations $i &
done
wait
The above script will wait for all 10 spawned subprocesses, but it will always give exit status 0 (see help wait). How can I modify this script so it will discover the exit statuses of the spawned subprocesses and return exit code 1 when any of the subprocesses ends with code !=0?
Is there any better solution than collecting the PIDs of the subprocesses, waiting for them in order, and summing their exit statuses?
Trapping the CHLD signal may not work because you can lose some signals if they arrive simultaneously.
A solution that waits for several subprocesses and exits when any one of them exits with a non-zero status code is to use 'wait -n'.
A status code of 127 indicates a non-existent process, which means the child may have already exited.
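A minimal sketch of this approach (wait -n needs bash 4.3 or newer; doCalculations is the placeholder command from the question):

#!/bin/bash
for i in $(seq 0 9); do
    doCalculations "$i" &
done

# Reap the children one at a time; fail fast as soon as one of them fails.
while true; do
    wait -n
    status=$?
    if [ "$status" -eq 127 ]; then
        break       # 127: no unwaited-for children are left
    elif [ "$status" -ne 0 ]; then
        exit 1      # a child exited with a non-zero status
    fi
done

Note the caveat above: a child that itself exits with status 127 is indistinguishable from "no children left".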
set -m allows you to use fg & bg in a script.
fg, in addition to putting the last process in the foreground, has the same exit status as the process it foregrounds.
while fg will stop looping when any fg exits with a non-zero exit status.
Unfortunately this won't handle the case when a process in the background exits with a non-zero exit status. (The loop won't terminate immediately; it will wait for the previous processes to complete.)
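A minimal sketch of that pattern, again with doCalculations standing in for the real workload:

#!/bin/bash
set -m   # enable job control so fg and bg can be used in the script

for i in $(seq 0 9); do
    doCalculations "$i" &
done

# Each fg brings one background job to the foreground and returns its exit
# status, so the loop stops as soon as a foregrounded job fails (or when
# there are no jobs left to foreground).
while fg; do :; done

Note that the while loop itself exits with status 0, so the script still needs extra handling if it must propagate the failure as its own exit code.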
I think that the most straightforward way to run jobs in parallel and check for status is using temporary files. There are already a couple of similar answers (e.g. Nietzche-jou and mug896).
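Something like the following (a sketch rather than verbatim code; doCalculations is the placeholder from the question):

#!/bin/bash
rm -f fail
for i in $(seq 0 9); do
    # Any job that fails drops a marker file named 'fail'.
    ( doCalculations "$i" || touch fail ) &
done
wait
# Succeeds (exit 0) only if no marker file exists; also removes it.
! rm fail 2> /dev/null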
The above code is not thread safe. If you are concerned that this code will be running at the same time as itself, it's better to use a more unique file name, like fail.$$. The last line is there to fulfill the requirement: "return exit code 1 when any of the subprocesses ends with code !=0". I threw an extra requirement in there to clean up. It may have been clearer to write it like this:
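For example (still a sketch, using the per-run marker name fail.$$):

#!/bin/bash
marker="fail.$$"
rm -f "$marker"
for i in $(seq 0 9); do
    ( doCalculations "$i" || touch "$marker" ) &
done
wait
if [ -f "$marker" ]; then
    rm -f "$marker"   # clean up the marker
    exit 1            # at least one subprocess failed
fi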
Here is a similar snippet for gathering results from multiple jobs: I create a temporary directory, store the output of each subtask in a separate file, and then dump them all for review. This doesn't really match the question, so I'm throwing it in as a bonus:
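A sketch of that bonus snippet:

#!/bin/bash
outdir=$(mktemp -d)

for i in $(seq 0 9); do
    # Each job writes its stdout and stderr to its own file.
    doCalculations "$i" > "$outdir/$i.out" 2>&1 &
done
wait

# Dump the collected outputs for review, then clean up.
for f in "$outdir"/*.out; do
    echo "=== $f ==="
    cat "$f"
done
rm -rf "$outdir"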
How about simply:
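Roughly (a sketch of the idea, not the exact original code):

#!/bin/bash
pids=""
for i in $(seq 0 9); do
    doCalculations "$i" &
    pids="$pids $!"   # remember every child's PID
done
wait $pids            # wait for all of them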
Update:
As pointed out by multiple commenters, the above waits for all processes to be completed before continuing, but it does not exit and fail if one of them fails. It can be made to do so with the following modification, suggested by @Bryan, @SamBrightman, and others:
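In sketch form, the modification waits on each PID individually and records any failure:

#!/bin/bash
pids=""
result=0
for i in $(seq 0 9); do
    doCalculations "$i" &
    pids="$pids $!"
done
for pid in $pids; do
    wait "$pid" || result=1   # remember if any child failed
done
exit "$result"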
I've had a go at this and combined all the best parts from the other examples here. This script will execute the checkpids function when any background process exits, and output the exit status without resorting to polling.
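A sketch of the idea (not the exact script; doCalculations is the placeholder from the question, and kill -0 is used to test whether a child is still alive):

#!/bin/bash
set -o monitor      # job control on: a CHLD signal is raised when a background job exits

pids=""             # children that have not been reaped yet
failed=0            # becomes 1 if any child exits with a non-zero status

checkpids() {
    local pid status still_running=""
    for pid in $pids; do
        if kill -0 "$pid" 2> /dev/null; then
            still_running="$still_running $pid"   # still alive
        else
            wait "$pid"                           # reap it and collect its status
            status=$?
            echo "pid $pid exited with status $status"
            if [ "$status" -ne 0 ]; then failed=1; fi
        fi
    done
    pids=$still_running
}

trap checkpids CHLD   # run checkpids whenever a child changes state

for i in $(seq 0 9); do
    doCalculations "$i" &
    pids="$pids $!"
done

# wait returns early (status > 128) whenever the CHLD trap fires, so this
# loop wakes up once per exiting child instead of polling.
while [ -n "${pids// /}" ]; do
    wait
done

exit "$failed"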