I've been using a simple bash preamble like this in my scripts:
#!/bin/bash
set -e
In conjunction with modularity (splitting things into small functions), this has bitten me today.
So, say I have a function somewhere like
foo() {
    # edit: some error happens that makes me want to exit the function and signal that to the caller
    return 2
}
Ideally I'd like to be able to use multiple small files, include their functions in other files, and then call those functions like this:

set +e      # temporarily disable errexit around the call
foo
rc=$?       # capture foo's exit code
set -e      # re-enable errexit
This works for exactly two layers of routines. But if foo itself calls subroutines in the same shielded way, the last setting in effect before its return will be set -e, which makes the script exit when foo returns non-zero; I cannot override this from the calling function. So what I had to do is
foo() {
    # call bar() in a shielded way, like above
    # ...
    set +e      # disable errexit just before signalling the error
    return 2
}
I find this very counterintuitive (and also not what I want: what if in some contexts I'd like to use the function without shielding against failures, while in other contexts I want to handle the cleanup myself?). What's the best way to handle this? Btw, I'm doing this on OS X; I haven't tested whether this behaviour differs on Linux.
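Edit: here's a minimal self-contained sketch of the failure mode (bar is just a stand-in for any failing helper):

#!/bin/bash
set -e

bar() {
    return 2        # the inner helper fails
}

foo() {
    set +e          # shield the call to bar, as above
    bar
    inner_rc=$?
    set -e          # set is global state, so errexit is back on...
    return 2        # ...and foo returning non-zero now kills the script
}

set +e              # this shielding is undone by foo's own set -e
foo
rc=$?               # never reached
set -e
echo "rc=$rc"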
Shell functions don't really have "return values", just exit codes.
You could add && : to the caller; this makes the command "tested", and set -e won't exit on it. The : is the "null command" (i.e. it doesn't do anything). In this case it doesn't even get executed, since it only gets run if foo returns 0 (which it doesn't).
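For example, with a foo like the one in the question (a minimal sketch, not necessarily the exact original snippet):

#!/bin/bash
set -e

foo() {
    return 2        # signal an error to the caller
}

foo && :            # foo is "tested" here, so errexit does not fire

echo "still running"

This outputs:

still running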
It's arguably a bit ugly, but then again, all of shell scripting is arguably a bit ugly ;-)
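A handy side effect (an observation, not something the man page spells out): $? after foo && : still holds foo's exit status, since : never runs when foo fails, and both yield 0 when it succeeds. So the rc capture from the question works without toggling errexit:

foo && :
rc=$?               # 2 if foo failed, 0 if it succeeded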
Quoting sh(1) from FreeBSD, which explains this better than bash's man page: