What's a quick-and-dirty way to make sure that only one instance of a shell script is running at a given time?
You can use GNU Parallel for this, as it works as a mutex when called as sem. A timeout of <0 means exit without running the script if the semaphore is not released within the timeout; a timeout of >0 means run the script anyway. Note that you should give the semaphore a name (with --id), or else it defaults to the controlling terminal. GNU Parallel is a very simple install on most Linux/OSX/Unix platforms: it is just a Perl script.

The flock path is the way to go. Think about what happens when the script suddenly dies: in the flock case you just lose the lock, and that is not a problem. Also note that an evil trick is to take a flock on the script itself, but that of course lets you run full-steam-ahead into permission problems.
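The concrete sem commands were lost from the GNU Parallel answer above. A sketch of what it describes, assuming GNU Parallel is installed; the lock name "myscript" and the echoed commands are placeholders for your own script:

```shell
#!/bin/sh
# Sketch: sem is GNU Parallel in mutex mode. The guard lets this run
# harmlessly on machines without GNU Parallel installed.
if command -v sem >/dev/null 2>&1; then
  # Only one job holding the "myscript" semaphore runs at a time;
  # --fg keeps the job in the foreground instead of backgrounding it.
  sem --id myscript --fg echo "only one of me runs at a time"

  # Negative timeout: exit without running the job if the semaphore is
  # not released within 60 seconds. A positive value would run it anyway.
  sem --id myscript --fg --semaphoretimeout -60 echo "ran within the timeout"
else
  echo "GNU Parallel (sem) is not installed" >&2
fi
```

Without --id, sem keys the lock to the controlling terminal, so two scripts started from different terminals would not exclude each other.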
I use a simple approach that handles stale lock files.
Note that some of the above solutions that store the pid ignore the fact that the pid can wrap around, so just checking whether there is a valid process with the stored pid is not enough, especially for long-running scripts.
I use noclobber to make sure only one script can open and write to the lock file at one time. Further, I store enough information to uniquely identify a process in the lockfile. I define the set of data to uniquely identify a process to be pid,ppid,lstart.
When a new script starts up, if it fails to create the lock file, it then verifies that the process that created the lock file is still around. If not, we assume the original process died an ungraceful death and left a stale lock file. The new script then takes ownership of the lock file, and all is well in the world again.
Should work with multiple shells across multiple platforms. Fast, portable and simple.
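A minimal sketch of the mechanism this answer describes. It uses only the pid for the staleness check (the full approach also compares ppid and lstart to guard against pid wraparound), and the lock file path is an example:

```shell
#!/bin/sh
LOCKFILE=/tmp/myscript.lock   # example path

acquire_lock() {
  # noclobber (set -C) makes '>' fail if the file already exists,
  # so only one process can create the lock file.
  if ( set -C; echo "$$" > "$LOCKFILE" ) 2>/dev/null; then
    return 0
  fi
  # Lock file exists: is its owner still alive?
  oldpid=$(cat "$LOCKFILE" 2>/dev/null)
  if [ -n "$oldpid" ] && kill -0 "$oldpid" 2>/dev/null; then
    return 1                      # genuine contention
  fi
  # Stale lock: take ownership. (There is a small race here between
  # rm and the re-create; the pid,ppid,lstart check narrows it.)
  rm -f "$LOCKFILE"
  ( set -C; echo "$$" > "$LOCKFILE" ) 2>/dev/null
}

if acquire_lock; then
  trap 'rm -f "$LOCKFILE"' EXIT   # clean up on normal exit
  echo "lock acquired by $$"
else
  echo "another instance is running" >&2
  exit 1
fi
```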
Simply add

[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || :

at the beginning of your script. It is boilerplate code from man flock. To see how it works I wrote a script and ran it simultaneously from two consoles. I have not fully worked out the mechanism, but it seems the script runs itself again using itself as the lock file: FLOCKER is set to "$0" just to give it some reasonable non-null value, and the trailing || : does nothing if something goes wrong.

It seems not to work on Debian 7, but works again with the experimental util-linux 2.25 package; the failing version writes "flock: ... Text file busy". This can be worked around by removing write permission on your script.
Use flock(1) to take an exclusive, scoped lock on a file descriptor. This way you can even synchronize different parts of the script. This ensures that the code between ( and ) is run by only one process at a time, and that the process does not wait too long for a lock.

Caveat: this particular command is part of util-linux. If you run an operating system other than Linux, it may or may not be available.
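The code block was lost from this answer. A minimal sketch of the pattern it describes, with an example lock path under /tmp and fd 200 as the (arbitrary) descriptor number:

```shell
#!/bin/bash
(
  # Wait at most 10 seconds for an exclusive lock on fd 200,
  # then give up instead of blocking forever.
  flock -x -w 10 200 || { echo "could not get lock" >&2; exit 1; }

  # Everything between ( and ) runs under the lock,
  # one process at a time.
  echo "critical section"

) 200>/tmp/mylockfile   # example path; the redirection opens fd 200
```

The lock is tied to the open descriptor, so it is released automatically when the subshell exits, even if the script dies inside the critical section.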