Bash script: run remote SSH commands in parallel

Posted 2020-06-17 06:36

Question:

I have a script that runs remote commands on several different machines over SSH. The script looks something like this:

for server in $servers; do
  echo "output from $server"
  ssh user@"$server" "some command"
done

The problem with this is obviously the time: for each server in turn it has to establish an SSH connection, run the command, wait for the answer, and print it. What I would like is a script that tries to establish all the connections at once and prints "output from $server" together with the command's output as soon as it arrives, so not necessarily in list order.

I've been googling this for a while but haven't found an answer. I cannot cancel the SSH session after the command runs, as one thread suggested, because I need the output, and I cannot use GNU parallel, which was suggested in other threads. I also cannot use any other tool or install anything on this machine; the only usable tool is GNU bash, version 4.1.2(1)-release.

Another question: how are SSH sessions like this limited? If I simply paste five or more lines of "ssh to server, run some command", nothing happens, or only the first one in the list executes (it works if I paste 3-4 lines). Thank you.

Answer 1:

Have you tried this?

for server in $servers; do
  ssh user@"$server" "command" &
done
wait
echo finished

Update: Start subshells:

for server in $servers; do
  (echo "output from $server"; ssh user@"$server" "command"; echo "End $server") &
done
wait
echo All subshells finished
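
If the interleaved output of the backgrounded jobs gets hard to read, a variation (just a sketch, not part of the original answer) is to capture each server's output in a temporary file and print it once all jobs are done; ssh -n redirects ssh's stdin from /dev/null so it cannot swallow the script's input. This trades the as-it-arrives behaviour for output grouped per server:

tmpdir=$(mktemp -d)
for server in $servers; do
  # -n: ssh reads stdin from /dev/null, so it cannot consume the loop's input
  ssh -n user@"$server" "command" > "$tmpdir/$server.out" 2>&1 &
done
wait
for server in $servers; do
  echo "output from $server"
  cat "$tmpdir/$server.out"
done
rm -rf "$tmpdir"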


Answer 2:

There are several parallel SSH tools that can handle that for you:

  • http://code.google.com/p/pdsh/
  • http://sourceforge.net/projects/clusterssh/
  • http://code.google.com/p/sshpt/
  • http://code.google.com/p/parallel-ssh/

You might also be interested in configuration-deployment solutions such as Chef, Puppet, Ansible, Fabric, etc.

A third option is to use a terminal broadcast tool such as pconsole.

If you can only use GNU tools, you can write your script like this:

for server in $servers; do
  { echo "output from $server"; ssh user@"$server" "command"; } | sed -e "s/^/$server:/" &
done
wait

and then sort the output to reconcile the lines.
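
Since every output line carries its server name as a prefix, a one-liner (a minimal sketch, assuming the loop above is saved as parallel-ssh.sh) is enough to group the interleaved lines back together:

# stable sort on the first colon-separated field keeps each server's lines in order
./parallel-ssh.sh | sort -s -t: -k1,1 > results.txt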



Answer 3:

I started with the shell hacks mentioned in this thread, then moved on to something somewhat more robust: https://github.com/bearstech/pussh

It's my daily workhorse; I basically run anything against 250 servers in 20 seconds (it's actually rate-limited, otherwise the connection rate kills my ssh-agent). I've been using it for years.

See for yourself in the man page (clone it and run 'man ./pussh.1'): https://github.com/bearstech/pussh/blob/master/pussh.1

Examples

Show all servers' rootfs usage in descending order:

pussh -f servers df -h / |grep /dev |sort -rn -k5

Count the number of processors in a cluster:

pussh -f servers grep ^processor /proc/cpuinfo |wc -l

Show the processor models, sorted by occurrence:

pussh -f servers sed -ne "s/^model name.*: //p" /proc/cpuinfo |sort |uniq -c

Fetch a list of installed packages, one file per host:

pussh -f servers -o packages-for-%h dpkg --get-selections

Mass-copy a file tree (broadcast):

tar czf files.tar.gz ... && pussh -f servers -i files.tar.gz tar -xzC /to/dest

Mass-copy several remote file trees (gather):

pussh -f servers -o '|(mkdir -p %h && tar -xzC %h)' tar -czC /src/path .

Note that the pussh -u feature (upload and execute) was the main reason I wrote this; no other tool seemed able to do it. I still wonder whether that's the case today.



Answer 4:

You may like the parallel-ssh project with the pssh command:

pssh -h servers.txt -l user command

It will print one line per server when the command executes successfully. With the -P option you can also see the command's output.
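
For example (a sketch; the host file, user name and command are placeholders), -P prints each host's output inline as it arrives, while -o writes each host's stdout to its own file:

# print output inline as soon as each host answers
pssh -h servers.txt -l user -P "df -h /"

# or collect one output file per host under the out/ directory
pssh -h servers.txt -l user -o out "df -h /"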