This question already has an answer here:
How to wait in bash for several subprocesses to finish and return exit code !=0 when any subprocess ends with code !=0? (27 answers)
Background
I'm working on a bash script to automate the process of building half a dozen projects that live in the same directory. Each project has two scripts to run in order to build it:
npm install
npm run build
The first line will fetch all of the dependencies from npm. Since this step takes the longest, and since the projects can fetch their dependencies simultaneously, I'm using a background job to fetch everything in parallel (i.e. npm install &).
The second line will use those dependencies to build the project. Since this must happen after all of the Step 1s finish, I'm running the wait command in between. See the code snippet below.
The Question
I would like to have my script exit as soon as an error occurs in any of the background jobs, or in the npm run build step that happens afterward.
I'm using set -e; however, this does not apply to the background jobs, so if one project fails to install its dependencies, everything else keeps going.
Here is a simplified example of how my script looks right now.
build.sh
set -e
DIR=$PWD
for dir in ./projects/**/
do
echo -e "\033[4;32mInstalling $dir\033[0m"
cd "$dir"
npm install & # takes a while, so do this in parallel
cd "$DIR"
done
wait # continue once the background jobs are completed
for dir in ./projects/**/
do
cd "$dir"
echo -e "\033[4;32mBuilding $dir\033[0m"
npm run build # Some projects use other projects, so build them in series
cd "$DIR"
echo -e "\n"
done
Again, I don't want the script to continue doing anything if an error occurs at any point; this applies to both the parent and the background jobs. Is this possible?
Collect the PIDs for the background jobs; then use wait to collect the exit status of each, exiting the first time any of those statuses is nonzero.
install_pids=( )
for dir in ./projects/**/; do
(cd "$dir" && exec npm install) & install_pids+=( $! )
done
for pid in "${install_pids[@]}"; do
wait "$pid" || exit
done
The above, while simple, has a caveat: If an item late in the list exits nonzero prior to items earlier in the list, this won't be observed until that point in the list is polled. To work around this caveat, you can repeatedly iterate through the entire list:
install_pids=( )
for dir in ./projects/**/; do
(cd "$dir" && exec npm install) & install_pids+=( $! )
done
while (( ${#install_pids[@]} )); do
for pid_idx in "${!install_pids[@]}"; do
pid=${install_pids[$pid_idx]}
if ! kill -0 "$pid" 2>/dev/null; then # kill -0 checks whether the process still exists
# we know this pid has exited; retrieve its exit status
wait "$pid" || exit
unset "install_pids[$pid_idx]"
fi
done
sleep 1 # in bash, consider a shorter non-integer interval, e.g. 0.2
done
However, because this polls, it incurs extra overhead. That can be avoided by trapping SIGCHLD and consulting jobs -n (which lists the jobs whose status changed since the last check) when the trap is triggered.
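If your bash is 4.3 or newer, wait -n offers a middle ground that needs neither polling nor the SIGCHLD trap machinery: it blocks until any one background job exits and returns that job's status. A minimal sketch, where fake_install is a hypothetical stand-in for the (cd "$dir" && exec npm install) subshell:

```shell
#!/usr/bin/env bash
# Sketch only: requires bash >= 4.3 for `wait -n`.
# fake_install is a hypothetical stand-in for (cd "$dir" && exec npm install).
fake_install() { sleep "$1"; return "$2"; }

fake_install 1 0 &   # succeeds after 1s
fake_install 2 1 &   # fails after 2s
fake_install 3 0 &   # would succeed after 3s

failed=0
for _ in 1 2 3; do
  # wait -n returns as soon as ANY remaining job exits, with that job's
  # status, so a failure is seen in completion order, not list order
  if ! wait -n; then
    failed=1
    break
  fi
done
echo "failed=$failed"   # prints "failed=1" after ~2 seconds
```

Unlike the wait "$pid" loop, this notices the 2-second failure without first waiting out any longer-running sibling; at that point you can kill the remaining jobs or simply exit.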
Bash isn't made for parallel processing such as this. To accomplish what you want, I had to write a function library. I'd suggest seeking a language more readily suited to this if possible.
There is a subtlety when looping through the pids, as in this example...
#!/bin/bash
pids=()
f() {
sleep $1
echo "no good"
false
}
t() {
sleep $1
echo "good"
true
}
t 3 &
pids+=( $! )
f 1 &
pids+=( $! )
t 2 &
pids+=( $! )
for p in "${pids[@]}"; do
wait "$p" || echo failed
done
Note the pids+=( $! ) syntax: the original code used pids+=$! (no parentheses), which appends the PID text onto pids[0] as a string rather than adding a new array element, so the loop ended up waiting on a single mangled value and the failure from the false command was never caught (observed on bash v4.2.46). With the array append fixed, wait "$p" does report the saved exit status of f 1 even though it finishes before t 3: bash retains the status of a terminated background job until it is collected. What remains is the ordering caveat described above: a nonzero exit late in the list is not observed until the loop reaches it.
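As a cross-check, bash retains the exit status of a terminated background job until wait collects it, so wait "$pid" reports a failure even from a job that finished long before it is polled. A self-contained demonstration, with sleep/false standing in for real work:

```shell
#!/usr/bin/env bash
# Check: bash stores the exit status of a reaped background job,
# so wait "$pid" reports it even if the job finished long ago.
pids=()

slow_ok()    { sleep 3; true;  }   # finishes last, exits 0
quick_fail() { sleep 1; false; }   # finishes first, exits 1

slow_ok &    pids+=( $! )
quick_fail & pids+=( $! )

statuses=()
for p in "${pids[@]}"; do
  wait "$p"
  statuses+=( $? )
done
echo "statuses: ${statuses[*]}"   # prints "statuses: 0 1"
```

The 1-second failure is reaped while the shell is still blocked on the 3-second job, yet its nonzero status is still available afterward; only the timing of when you notice it depends on list order.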