Quick-and-dirty way to ensure only one instance of a shell script is running

Published 2019-01-03 07:11

What's a quick-and-dirty way to make sure that only one instance of a shell script is running at a given time?

30 answers
何必那么认真
Answer 2 · 2019-01-03 07:55
# fuser lists the PIDs of every process that has this script file open;
# if the count is anything other than 1 (just this instance), bail out
if [ 1 -ne $(/bin/fuser "$0" 2>/dev/null | wc -w) ]; then
    exit 1
fi
Answer 3 · 2019-01-03 08:01

You need an atomic operation, like flock, else this will eventually fail.
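For example, a minimal flock sketch might look like this (the lock-file path and the file descriptor number are arbitrary choices of mine):

(
  # -n makes a second instance fail immediately instead of waiting
  flock -n 9 || exit 1
  # critical section: only one instance gets here at a time
  # ... do stuff ...
) 9>/var/lock/myscript.flock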

But what to do if flock is not available? Well, there is mkdir. That is an atomic operation too: only one process will succeed at the mkdir; all the others will fail.

So the code is:

if mkdir /var/lock/.myscript.exclusivelock 2>/dev/null
then
  # do stuff
  :
  rmdir /var/lock/.myscript.exclusivelock
else
  # another instance already holds the lock
  exit 1
fi

You need to take care of stale locks, or else after a crash your script will never run again.
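One way to reduce (not eliminate) the stale-lock problem, sketched here as my own addition rather than part of the original answer, is to remove the lock directory from a trap so that normal exits and common signals clean up automatically:

lockdir=/var/lock/.myscript.exclusivelock
if mkdir "$lockdir" 2>/dev/null
then
  # clean up on any exit; turn INT/TERM into a normal exit so the trap fires
  trap 'rmdir "$lockdir" 2>/dev/null' EXIT
  trap 'exit 1' INT TERM
  # do stuff
  :
else
  echo "Another instance is already running" >&2
  exit 1
fi

A kill -9 or a power loss still leaves the directory behind, so checking the lock's age (or storing and verifying a PID inside it) is still needed for full robustness.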

手持菜刀,她持情操
Answer 4 · 2019-01-03 08:01

The semaphoric utility uses flock (as discussed above, e.g. by presto8) to implement a counting semaphore. It allows whatever number of concurrent processes you want. We use it to limit the concurrency of various queue-worker processes.

It's like sem but much lighter weight. (Full disclosure: I wrote it after finding that sem was way too heavy for our needs and that there wasn't a simple counting-semaphore utility available.)
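To illustrate the counting-semaphore idea in plain shell (this is only a sketch of the general flock(1) technique, not semaphoric's actual interface; MAX_SLOTS, the lock-file names, and do_work are placeholders of mine):

MAX_SLOTS=4
i=1
while [ "$i" -le "$MAX_SLOTS" ]; do
  # each slot has its own lock file; grab the first free one
  exec 9>"/tmp/myworker.slot.$i"
  if flock -n 9; then
    do_work   # placeholder for the real job
    exit 0    # the lock on fd 9 is released when this process exits
  fi
  i=$((i + 1))
done
echo "all $MAX_SLOTS slots are busy" >&2
exit 1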

Deceive 欺骗
Answer 5 · 2019-01-03 08:01

The flock path is the way to go. Think about what happens when the script dies unexpectedly: in the flock case you just lose the lock, and that is not a problem. Also note that an evil trick is to take a flock on the script file itself, but that, of course, lets you run full steam ahead into permission problems.
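A sketch of that trick, under the assumption that flock(1) is available and the script file is readable by the user running it (write access is where the permission problems come in):

# illustration of the trick above: lock the script file itself
exec 9<"$0" || exit 1
flock -n 9 || { echo "another instance is already running" >&2; exit 1; }
# ... rest of the script; the lock is held until this process exits ...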

唯我独甜
Answer 6 · 2019-01-03 08:02

Actually, although bmdhacks's answer is almost right, there is a slight chance that the second script runs after the first has checked the lock file but before it has written it. In that case both will write the lock file and both will keep running. Here is how to make it work for sure:

lockfile=/var/lock/myscript.lock

if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null ; then
  trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
else
  # you can drop the message if you want; the exit is what keeps a second instance out
  echo "Another instance is already running!"
  exit 1
fi

The noclobber option makes sure the redirection fails if the file already exists, so the redirection is effectively atomic: you create and check the lock file with a single command. You don't need to remove the lock file at the end of the script; it will be removed by the trap. I hope this helps people who read it later.

P.S. I didn't see that Mikel had already answered the question correctly, although he didn't include the trap command, which reduces the chance of the lock file being left behind when the script is stopped with Ctrl-C, for example. So this is the complete solution.

够拽才男人
Answer 7 · 2019-01-03 08:02

Take a look at FLOM (Free LOck Manager), http://sourceforge.net/projects/flom/: you can synchronize commands and/or scripts using abstract resources that do not need lock files in a filesystem. You can even synchronize commands running on different systems without a NAS (Network Attached Storage) such as an NFS (Network File System) server.

In the simplest use case, serializing "command1" and "command2" may be as easy as executing:

flom -- command1

and

flom -- command2

from two different shell scripts.
