How do I loop Vagrant provisioning in a multi-machine environment?

Posted 2019-04-08 17:47

Question:

I have a multi-machine Vagrantfile setting up a 5 node environment.

I've been looking around to see what levels of control you have over the order of provisioning, but it's pretty limited:

https://docs.vagrantup.com/v2/multi-machine/

I want to provision 5 nodes, then go back to the first node, and run other provisioning steps there.

What I mean is that you have a Vagrantfile like this:

Vagrant.configure('2') do |config|
   # global provisioning that runs on every machine
   config.vm.provision "shell", inline: "echo 'some stuff'"

   config.vm.define 'node1' do |node1|
      # node1-specific provisioning
      node1.vm.provision "shell", inline: "echo 'some more stuff'"
   end

   config.vm.define 'node2' do |node2|
      # node2-specific provisioning
      node2.vm.provision "shell", inline: "echo 'some other stuff'"
   end

   # ... node3, node4, node5 ...
end

But after Vagrant has finished starting and provisioning all the machines up to node5, I then want to run another provisioner on node1. Does anyone know how to do this? Maybe some Ruby hackery?

Answer 1:

If what you want is to have that other provisioner run automagically right after all the machines have been vagrant up'd, unfortunately there's no way to do that as far as I know; Vagrant will always run all the provisioners specified (unless you tell it to run just a subset of them).

The only way you might be able to emulate this would be to have different kinds of provisioners for each machine and selectively run them as needed. So, for example, you'd run vagrant up --provision --provision-with=shell and then vagrant provision --provision-with=chef_solo to have the shell provisioners run first and the chef_solo provisioning afterwards.
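A minimal sketch of that approach, assuming the post-step only needs to run on node1 and using placeholder names (bootstrap.sh and the post_cluster_setup recipe are my own illustrations, not from the question):

Vagrant.configure('2') do |config|
   (1..5).each do |i|
      config.vm.define "node#{i}" do |node|
         # first pass, runs on every node:
         #   vagrant up --provision --provision-with=shell
         node.vm.provision "shell", path: "bootstrap.sh"

         if i == 1
            # second pass, only defined on node1 and only run when requested:
            #   vagrant provision --provision-with=chef_solo
            node.vm.provision "chef_solo" do |chef|
               chef.add_recipe "post_cluster_setup"
            end
         end
      end
   end
end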

But if you want to manually fire up a provisioner after all the machines have been brought up, you can just use the vagrant provision command to accomplish that.
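For example (a sketch; node1 matches the machine name used in the question):

vagrant up                                      # brings up and provisions node1..node5 as usual
vagrant provision node1                         # afterwards, re-runs the provisioners defined on node1
vagrant provision node1 --provision-with=shell  # or re-run only a given provisioner type on node1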



Answer 2:

One possible way of doing this is to execute commands between machines over ssh. The only additional thing you need to do is copy the Vagrant insecure private key to each of the guests.

Then you can ssh between the machines in your cluster (always handy) and also do stuff like this:

Vagrant.configure('2') do |config|
   # global provisioning that runs on every machine
   config.vm.provision "shell", inline: "echo 'some stuff'"

   config.vm.define 'node1' do |node1|
      # node1-specific provisioning
      node1.vm.provision "shell", inline: "echo 'some more stuff'"
   end

   config.vm.define 'node2' do |node2|
      node2.vm.provision "shell", inline: "/vagrant/bootstrap-webserver.sh"
      node2.vm.provision "shell", inline: "ssh vagrant@node1 sh /vagrant/trigger-build.sh"
   end

   config.vm.define 'node3' do |node3|
      node3.vm.provision "shell", inline: "/vagrant/create-database.sh"
      node3.vm.provision "shell", inline: "ssh vagrant@node1 sh /vagrant/initialise-database.sh"
   end

   # ... node4, node5 ...
end

You probably also want to set "PasswordAuthentication no" in your sshd_config on the guests and add "-o StrictHostKeyChecking=no" to the ssh commands above to get them to work.
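As a sketch of the key setup mentioned above, one way is to push the insecure key into each guest with a file provisioner; the key location and the config.ssh.insert_key setting here are assumptions about a default Vagrant install, not part of the answer:

Vagrant.configure('2') do |config|
   # keep the shared insecure key instead of letting Vagrant replace it per machine (assumption)
   config.ssh.insert_key = false

   # copy the insecure private key into each guest so node-to-node ssh works (assumed path)
   config.vm.provision "file",
      source: "~/.vagrant.d/insecure_private_key",
      destination: "/home/vagrant/.ssh/id_rsa"
   config.vm.provision "shell",
      inline: "chmod 600 /home/vagrant/.ssh/id_rsa"
end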



Answer 3:

If you want to make this even easier, use the sshpass command instead of ssh so that you don't need to worry about keys.

node3.vm.provision "shell", inline: 'sshpass -pvagrant ssh -oStrictHostKeyChecking=no vagrant@node1 "sudo sh /vagrant/initialise-database.sh"'

The command assumes your box has a vagrant user with the password vagrant and sudo access.
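Note that sshpass usually isn't present on base boxes, so it needs installing first; a minimal sketch assuming a Debian/Ubuntu guest (the package manager is an assumption):

# assumed Debian/Ubuntu guest: install sshpass before the cross-node provisioner runs
node3.vm.provision "shell", inline: "apt-get update -qq && apt-get install -y sshpass"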



Answer 4:

I think the accepted answer is not the best method.

What you want to do is create a list of named nodes, and put the one that you want the final provisioning to run on last in the list, like this:

NODES = [
    { :hostname => "api1", :ip => "192.168.0.11" },
    { :hostname => "api2", :ip => "192.168.0.12" },
    { :hostname => "controller", :ip => "192.168.0.2" }
]

Vagrant.configure("2") do |config|
    // Do whatever global config here
    NODES.each do |node|
        config.vm.define node[:hostname] do |nodeconfig|
            nodeconfig.vm.hostname = node[:hostname]
            // Do config that is the same across each node
            if node[:hostname] == "controller"
                // Do your provisioning for this machine here
            else
                // Do provisioning for the other machines here
            end
        end
    end
    // Do any global provisioning
end

The global provisioning happens first for each node, and the scoped provisioning comes next. By placing the controller at the end of the list, it will be the last to have its scoped provisioning run. You can stage the nodes by changing their order in the list and adding conditionals. This is how I have mine set up so that ssh keys can be copied to my nodes and my Ansible controller gets run last, which lets the remaining machines be configured via Ansible as the final step.
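As a sketch of what the controller branch could contain, here is one way to run Ansible from inside the controller against the other nodes; the ansible_local provisioner, playbook name, and inventory path are my assumptions, not part of the answer:

if node[:hostname] == "controller"
    # hypothetical: run a playbook from inside the controller guest against api1/api2
    nodeconfig.vm.provision "ansible_local" do |ansible|
        ansible.playbook       = "site.yml"       # assumed playbook in the project folder
        ansible.inventory_path = "inventory.ini"  # assumed static inventory listing the other nodes
        ansible.limit          = "all"
    end
end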