Use SSH keys with a passphrase in a Vagrant + Chef setup

Posted 2019-02-03 06:47

I've got a VM running under Vagrant, and I'm provisioning it with Chef. One of the steps clones a git repo, but my SSH key (on my host machine) has a passphrase on it.

When I run vagrant up, the process fails at the git clone step with the following error:
Permission denied (publickey). fatal: The remote end hung up unexpectedly
(The key has been added to the ssh-agent on the host machine, with the passphrase.)

I tried to solve this with ssh agent forwarding by doing the following:
Added config.ssh.forward_agent = true to the Vagrantfile
Added Defaults env_keep = "SSH_AUTH_SOCK" to /etc/sudoers on the VM
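
For reference, the forwarding setting lives in the Vagrantfile like this (a minimal sketch; the box name is a placeholder, not from the question):

```ruby
# Vagrantfile (sketch -- box name is a placeholder)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  # Forward the host's ssh-agent into the guest so git can use the host key
  config.ssh.forward_agent = true
end
```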

Now, vagrant up still fails when it gets to the git clone part, but if I run vagrant provision afterwards, it passes. I'm guessing this is because the SSH configuration is set up when the VM is brought up and isn't reloaded.

I have tried to reload ssh after adjusting those two settings, but that hasn't helped.

Any idea how to solve this?

Thanks.

3 Answers
三岁会撩人
#2 · 2019-02-03 07:07

This may not be the answer you're looking for, but an easy fix would be to generate a dedicated deployment SSH key without a passphrase. I prefer separate, dedicated deploy keys over a single key shared across multiple applications.
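
If you go that route, generating a passphrase-less deploy key might look like this (a sketch; the key path and comment are placeholders, one key per application):

```shell
#!/usr/bin/env sh
# Hypothetical path -- pick a dedicated key per application
KEY="$HOME/.ssh/deploy_key_myapp"

# -N "" gives the key an empty passphrase; -C is just a label
ssh-keygen -t ed25519 -N "" -C "deploy key for myapp" -f "$KEY"

# The public half is what you register as a deploy key on the git host
cat "$KEY.pub"
```

Register the printed public key on the repository's deploy-key settings, then point the clone step in your recipe at the private key.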

欢心
#3 · 2019-02-03 07:15

As you noted, updating sudoers during the initial run is too late to benefit that run, as Chef is already running under sudo by that point.

Instead, I wrote a hacky recipe that finds the appropriate ssh-agent socket and updates the SSH_AUTH_SOCK environment variable to match. It also disables strict host key checking so the initial outbound connection is approved automatically.

Save this as a recipe that's executed any time prior to the first SSH connection (tested with Ubuntu, but it should work with other distributions):

directory "/root/.ssh" do
  action :create
  mode "0700"
end

file "/root/.ssh/config" do
  action :create
  content "Host *\n  StrictHostKeyChecking no\n"
  mode "0600"
end

ruby_block "Give root access to the forwarded ssh agent" do
  block do
    # Map each forwarded agent socket to the PID of the sshd that owns it
    agents = {}
    Dir.glob('/tmp/ssh*/agent*').each do |fn|
      agents[fn.match(/agent\.(\d+)$/)[1]] = fn
    end
    # Walk up our ancestor processes until one of them owns an agent socket
    ppid = Process.ppid.to_s
    while ppid != '1'
      if (agent = agents[ppid])
        ENV['SSH_AUTH_SOCK'] = agent
        break
      end
      File.open("/proc/#{ppid}/status", "r") do |file|
        ppid = file.read.match(/PPid:\s+(\d+)/)[1]
      end
    end
    # Uncomment to require that an ssh-agent be available
    # fail "Could not find running ssh agent - Is config.ssh.forward_agent enabled in Vagrantfile?" unless ENV['SSH_AUTH_SOCK']
  end
  action :create
end

Alternatively, create a base box with the sudoers update already applied and build your future VMs from that.

手持菜刀,她持情操
#4 · 2019-02-03 07:26

You can run multiple provisioners with Vagrant (even of the same kind), and each provisioner gets its own SSH connection. I typically solve this problem with a shell provisioner that adds Defaults env_keep+="SSH_AUTH_SOCK" to /etc/sudoers on the VM.

Here's the Bash script I use to do just that:

#!/usr/bin/env bash

# Ensure that SSH_AUTH_SOCK is kept
if [ -n "$SSH_AUTH_SOCK" ]; then
  echo "SSH_AUTH_SOCK is present"
else
  echo "SSH_AUTH_SOCK is not present, adding as env_keep to /etc/sudoers"
  echo "Defaults env_keep+=\"SSH_AUTH_SOCK\"" >> "/etc/sudoers"
fi

I haven't tested this with the Chef provisioner, only with additional shell provisioners, but from what I understand it should work the same for your use case.
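
The ordering is what makes this work: the shell provisioner runs first and patches sudoers, so the Chef provisioner's fresh SSH connection keeps SSH_AUTH_SOCK. A minimal Vagrantfile sketch (the script name and recipe name are placeholders):

```ruby
Vagrant.configure("2") do |config|
  config.ssh.forward_agent = true

  # Runs first, on its own SSH connection: patches /etc/sudoers
  config.vm.provision "shell", path: "keep_ssh_auth_sock.sh"

  # Runs second, on a fresh connection that now preserves SSH_AUTH_SOCK
  config.vm.provision "chef_solo" do |chef|
    chef.add_recipe "myapp::deploy"
  end
end
```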
