What's a quick-and-dirty way to make sure that only one instance of a shell script is running at a given time?
You need an atomic operation, like flock, else this will eventually fail.
But what can you do if flock is not available? Well, there is mkdir. That's an atomic operation too: only one process will succeed at the mkdir, all others will fail.
So the code is something like the sketch below.
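A minimal sketch, assuming the lock directory lives at /tmp/myscript.lock and that recording the PID for later checks is acceptable:

```bash
#!/bin/bash
# Assumed lock-directory path; use any location the script's user can write to.
LOCKDIR=/tmp/myscript.lock

if mkdir "$LOCKDIR" 2>/dev/null; then
    # Only one process can create the directory, so this instance holds the lock.
    echo "$$" > "$LOCKDIR/pid"                    # record our PID for stale-lock checks
    trap 'rm -rf "$LOCKDIR"; exit' INT TERM EXIT  # release the lock when the script ends
    # ... do the real work here ...
else
    echo "Another instance is already running." >&2
    exit 1
fi
```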
You also need to take care of stale locks, or after a crash your script will never run again.
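One quick-and-dirty way to do that, building on the PID recorded in the sketch above, is to drop the lock when the recorded process no longer exists:

```bash
# Stale-lock recovery for the mkdir approach above (assumed paths).
# If the PID stored in the lock directory is gone, assume the previous run
# crashed and remove the leftover lock. Note this check is not atomic itself,
# so it only shrinks the window for trouble rather than eliminating it.
LOCKDIR=/tmp/myscript.lock
if [ -d "$LOCKDIR" ] && ! kill -0 "$(cat "$LOCKDIR/pid" 2>/dev/null)" 2>/dev/null; then
    rm -rf "$LOCKDIR"
fi
```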
The semaphoric utility uses flock (as discussed above, e.g. by presto8) to implement a counting semaphore. It enables any specific number of concurrent processes you want. We use it to limit the level of concurrency of various queue worker processes. It's like sem but much lighter-weight. (Full disclosure: I wrote it after finding that sem was way too heavy for our needs and that there wasn't a simple counting semaphore utility available.)
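semaphoric's own syntax isn't shown here, but the same counting-semaphore idea can be sketched with GNU parallel's sem, which the answer compares it to (the semaphore name, job limit, and worker command are illustrative):

```bash
# Allow at most 3 concurrent workers under the semaphore named "workers".
for item in queue/*; do
    sem --id workers -j 3 ./process_queue_item "$item"
done

# Block until every job started under this semaphore has finished.
sem --wait --id workers
```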
The flock path is the way to go. Think about what happens when the script suddenly dies: in the flock case you just lose the flock, and that is not a problem. Also note that an evil trick is to take a flock on the script file itself, but that of course lets you run full-steam-ahead into permission problems.
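For concreteness, a minimal sketch of the flock-on-a-dedicated-lock-file variant (the path and the file-descriptor number are arbitrary choices):

```bash
#!/bin/bash
# Open (or create) a dedicated lock file on file descriptor 200.
exec 200>/tmp/myscript.lockfile

# Try to take an exclusive lock without waiting; give up if another instance
# holds it. If the script dies, the kernel releases the lock automatically.
flock -n 200 || { echo "Another instance is running." >&2; exit 1; }

# ... do the real work here ...
```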
Actually, although the answer of bmdhacks is almost good, there is a slight chance that the second script runs after the first has checked the lockfile but before it has written it. So they will both write the lock file and they will both be running. Here is how to make it work for sure:
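A sketch of the noclobber approach (the lock-file path and messages are illustrative):

```bash
#!/bin/bash
lockfile=/tmp/myscript.lock

# With noclobber set, the redirection fails if the file already exists,
# so creating and checking the lock file is a single atomic step.
if ( set -o noclobber; echo "$$" > "$lockfile" ) 2>/dev/null; then
    # Remove the lock file on normal exit, Ctrl-C, or termination.
    trap 'rm -f "$lockfile"; exit' INT TERM EXIT

    # ... do the real work here ...

    rm -f "$lockfile"
    trap - INT TERM EXIT
else
    echo "Failed to acquire lock: held by PID $(cat "$lockfile")" >&2
    exit 1
fi
```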
The noclobber option makes sure that the redirect command will fail if the file already exists. So the redirect command is actually atomic: you write and check the file with one command. You don't need to remove the lockfile at the end of the script; it will be removed by the trap. I hope this helps people who read it later. P.S. I didn't see that Mikel had already answered the question correctly, although he didn't include the trap command to reduce the chance that the lock file will be left over after stopping the script with Ctrl-C, for example. So the sketch above, with the trap included, is the complete solution.
Take a look at FLOM (Free LOck Manager), http://sourceforge.net/projects/flom/: you can synchronize commands and/or scripts using abstract resources that do not need lock files in a filesystem. You can also synchronize commands running on different systems without a NAS (Network Attached Storage) such as an NFS (Network File System) server.
In the simplest use case, serializing "command1" and "command2" from two different shell scripts may be as easy as wrapping each command in a flom invocation, as sketched below.
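A sketch using FLOM's basic command-line form, where flom without extra options serializes on a default abstract resource:

```bash
# First shell script: run command1 while holding FLOM's default resource.
flom -- command1
```

and, in the second script:

```bash
# Second shell script: waits for the same resource, so it never overlaps command1.
flom -- command2
```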