SSH Agent Forwarding with Ansible

Posted 2019-01-12 22:33

I’m using Ansible 1.5.3 and Git with ssh agent forwarding (https://help.github.com/articles/using-ssh-agent-forwarding). I can log into the server that I am managing with Ansible and test that my connection to git is correctly configured:

ubuntu@test:~$ ssh -T git@github.com
Hi gituser! You've successfully authenticated, but GitHub does not provide shell access.

I can also clone and update one of my repos using this account so my git configuration looks good and uses ssh forwarding when I log into my server directly via ssh.
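
A quick way to confirm that the agent really is forwarded is to list the loaded keys on the server; if forwarding works, ssh-add reports the keys held by my local agent (the fingerprint and path shown here are placeholders):

ubuntu@test:~$ ssh-add -l
2048 SHA256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx /home/localuser/.ssh/id_rsa (RSA)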

The problem: when I attempt the same test shown above using the Ansible command module, it fails with “Permission denied”. Part of the Ansible output (with verbose logging) looks like this:

failed: [xxx.xxxxx.com] => {"changed": true, "cmd": ["ssh", "-T", "git@github.com"], "delta": "0:00:00.585481", "end": "2014-06-09 14:11:37.410907", "rc": 255, "start": "2014-06-09 14:11:36.825426"}
stderr: Permission denied (publickey).

Here is the simple playbook that runs this command:

- hosts: webservers
  sudo: yes
  remote_user: ubuntu

  tasks:

  - name: Test that git ssh connection is working.
    command: ssh -T git@github.com

The question: why does everything work correctly when I manually log in via ssh and run the command but fail when the same command is run as the same user via Ansible?

I will post the answer shortly if no one else beats me to it. Although I am using git to demonstrate the problem, it could occur with any module that depends on ssh agent forwarding. It is not specific to Ansible but I suspect many will first encounter the problem in this scenario.

4 Answers
Explosion°爆炸
#2 · 2019-01-12 22:48

Another possible answer to your question (with the caveat that I am using Ansible 1.9) is the following:

You may want to check your /etc/ansible/ansible.cfg (or the other three potential locations where config settings can be overridden) for transport=smart, as recommended in the Ansible docs. Mine had defaulted to transport=paramiko at some point during a previous install attempt, preventing my control machine from using OpenSSH, and thus agent forwarding. This is probably a massive edge case, but who knows? It could be you!
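
For reference, the setting lives in the [defaults] section, so a minimal ansible.cfg that pins the recommended transport looks like this:

[defaults]
transport=smart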

Though I didn't find it necessary for my configuration, others have mentioned that you should add -o ForwardAgent=yes to the ssh_args setting in the same file, like so:

[ssh_connection]
ssh_args=-o ForwardAgent=yes

I only mention it here for the sake of completeness.
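
If you would rather not touch ansible.cfg at all, the same effect can be had in the OpenSSH client config on the control machine, since the smart transport ends up invoking OpenSSH; the host pattern below is a placeholder for your managed hosts:

Host *.xxxxx.com
    ForwardAgent yes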

放荡不羁爱自由
#3 · 2019-01-12 22:59

To expand on @j.freckle's answer, the Ansible way to change the sudoers file is:

- name: Add ssh agent line to sudoers
  lineinfile: 
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
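
One caution: a syntax error in /etc/sudoers can lock you out of sudo entirely, so it is worth letting lineinfile check the result with visudo before the file is written. A slightly safer variant of the task above (assuming visudo is at /usr/sbin/visudo):

- name: Add ssh agent line to sudoers
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
    # refuse to save the file if visudo rejects the result
    validate: /usr/sbin/visudo -cf %s
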
Emotional °昔
#4 · 2019-01-12 23:05

There are some very helpful partial answers here, but after running into this issue a number of times, I think an overview would be helpful.

First, you need to make sure that SSH agent forwarding is enabled when connecting from your client running Ansible to the target machine. Even with transport=smart, SSH agent forwarding may not be automatically enabled, depending on your client's SSH configuration. To ensure that it is, you can update your ~/.ansible.cfg to include this section:

[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r -o ForwardAgent=yes

Next, you'll likely have to deal with the fact that become: yes (and become_user: root) will generally disable agent forwarding because the SSH_AUTH_SOCK environment variable is reset. (I find it shocking that Ansible seems to assume that people will SSH as root, since that makes any useful auditing impossible.) There are a few ways to deal with this. As of Ansible 2.2, the easiest approach is to preserve the (whole) environment when using sudo by specifying the -E flag:

become_flags: "-E"
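
For context, here is a minimal sketch of where that keyword sits, reusing the play from the original question; it can also be set per task:

- hosts: webservers
  become: yes
  become_flags: "-E"
  remote_user: ubuntu

  tasks:

  - name: Test that git ssh connection is working.
    command: ssh -T git@github.com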

However, this can have unwanted side-effects by preserving variables like PATH. The cleanest approach is to only preserve SSH_AUTH_SOCK by including it in env_keep in your /etc/sudoers file:

Defaults    env_keep += "SSH_AUTH_SOCK"

To do this with Ansible:

- name: enable SSH forwarding for sudo
  lineinfile:
    dest: /etc/sudoers
    insertafter: '^#?\s*Defaults\s+env_keep\b'
    line: 'Defaults    env_keep += "SSH_AUTH_SOCK"'

This task is a little more conservative than some of the others suggested here: it inserts the line after any existing default env_keep settings (or at the end of the file, if none are found), without modifying those settings or assuming SSH_AUTH_SOCK is already present.
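
With the sudoers line in place, you can re-run the original test under become. One wrinkle worth knowing: ssh -T git@github.com exits with status 1 even on successful authentication (GitHub refuses shell access), so the command module will report failure unless you tell it otherwise. A sketch of a verification task that treats only the rc 255 SSH error as a failure:

- name: Verify that agent forwarding survives sudo
  command: ssh -T git@github.com
  register: git_ssh_test
  # rc 1 with "successfully authenticated" on stderr means it worked;
  # rc 255 is the "Permission denied (publickey)" failure from the question.
  failed_when: git_ssh_test.rc == 255
  changed_when: false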

Fickle 薄情
#5 · 2019-01-12 23:09

The problem is resolved by removing this line from the playbook:

sudo: yes

When sudo is run on the remote host, the environment variables set by ssh during login are no longer available. In particular, SSH_AUTH_SOCK, which "identifies the path of a UNIX-domain socket used to communicate with the agent", is no longer visible, so ssh agent forwarding does not work.
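
You can see this directly on the remote host: with forwarding active, the variable is present in your login shell but gone under sudo (the socket path shown is illustrative):

ubuntu@test:~$ echo "$SSH_AUTH_SOCK"
/tmp/ssh-XXXXXXXXXX/agent.1234
ubuntu@test:~$ sudo sh -c 'echo "[$SSH_AUTH_SOCK]"'
[]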

Avoiding sudo when you don't need it is one way to work around the problem. Another is to ensure that SSH_AUTH_SOCK survives your sudo session by adding a line to the sudoers file:

/etc/sudoers:

Defaults    env_keep += "SSH_AUTH_SOCK"