How to define the SSH private key for servers fetched by dynamic inventory

Posted 2019-01-30 11:09

Question:

I ran into a configuration problem while writing an Ansible playbook: how to specify the SSH private key file.

As we know, for static inventory servers we can define each host together with its IP and the related SSH private key in the Ansible hosts file.
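For example (the host names, addresses, and key paths below are illustrative):

[app_servers]
server1 ansible_host=192.0.2.10 ansible_ssh_private_key_file=~/.ssh/server1.pem
server2 ansible_host=192.0.2.11 ansible_ssh_private_key_file=~/.ssh/server2.pem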

But I have no idea how to define that for dynamic inventory servers.

Example:

---
- hosts: tag_Name_server1
  gather_facts: no
  roles:
    - role1

- hosts: tag_Name_server2
  gather_facts: no
  roles:
    - role2

The following command is used to run that playbook:

ansible-playbook test.yml -i ec2.py --private-key ~/.ssh/SSHKEY.pem

My question is:

  1. How can I define ~/.ssh/SSHKEY.pem in Ansible files rather than on the command line?
  2. I guess there might be a parameter in the playbook, like "gather_facts", to define which private key should be used for the hosts above, but there doesn't seem to be such a parameter?
  3. If there is no way to define the private key in files, what should be passed on the command line when different key files are used for different dynamically fetched servers in one playbook?

Thank you for your comments.

Answer 1:

TL;DR: Specify the key file in a group variable file, since 'tag_Name_server1' is a group.


Note: I'm assuming you're using the EC2 external inventory script. If you're using some other dynamic inventory approach, you might need to tweak this solution.

This is an issue I've been struggling with, on and off, for months, and I've finally found a solution, thanks to Brian Coca's suggestion here. The trick is to use Ansible's group variable mechanisms to automatically pass along the correct SSH key file for the machine you're working with.

The EC2 inventory script automatically sets up various groups that you can use to refer to hosts. You're using this in your playbook: in the first play, you're telling Ansible to apply 'role1' to the entire 'tag_Name_server1' group. We want to direct Ansible to use a specific SSH key for any host in the 'tag_Name_server1' group, which is where group variable files come in.

Assuming that your playbook is located in the 'my-playbooks' directory, create files for each group under the 'group_vars' directory:

my-playbooks
|-- test.yml
+-- group_vars
     |-- tag_Name_server1.yml
     +-- tag_Name_server2.yml

Now, any time you refer to these groups in a playbook, Ansible will check the appropriate files, and load any variables you've defined there.

Within each group var file, we can specify the key file to use for connecting to hosts in the group:

# tag_Name_server1.yml
# --------------------
# 
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: /path/to/ssh/key/server1.pem

Now, when you run your playbook, it should automatically pick up the right keys!
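For example, with the group variable files above in place, the playbook from the question can be run without the --private-key flag:

ansible-playbook test.yml -i ec2.py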


Using environment vars for portability

I often run playbooks on many different servers (local, remote build server, etc.), so I like to parameterize things. Rather than using a fixed path, I have an environment variable called SSH_KEYDIR that points to the directory where the SSH keys are stored.

In this case, my group vars files look like this, instead:

# tag_Name_server1.yml
# --------------------
# 
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/server1.pem"

Further Improvements

There's probably a bunch of neat ways this could be improved. For one thing, you still need to manually specify which key to use for each group. Since the EC2 inventory script includes details about the keypair used for each server, there's probably a way to get the key name directly from the script itself. In that case, you could supply the directory the keys are located in (as above), and have it choose the correct keys based on the inventory data.
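For example, since the script exposes the keypair name as the ec2_key_name host variable, a single group variable file could, in principle, derive the key path per host. This is an untested sketch:

# group_vars/all.yml (sketch; assumes ec2.py sets ec2_key_name for each host)
---
ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/{{ ec2_key_name }}.pem"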



Answer 2:

The best solution I could find for this problem is to specify the private key file in ansible.cfg (I usually keep it in the same folder as the playbook):

[defaults]
inventory=ec2.py
vault_password_file = ~/.vault_pass.txt
host_key_checking = False
private_key_file = /Users/eric/.ssh/secret_key_rsa

This still sets the private key globally for all hosts in the playbook, though.

Note: you have to specify the full path to the key file; ~user/.ssh/some_key_rsa is silently ignored.
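Since Ansible reads ansible.cfg from the current directory, and the file above already points at the inventory script, the invocation from the question (run from the playbook folder) reduces to:

ansible-playbook test.yml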



Answer 3:

You can simply define the key to use directly when running the command:

# -vvvv:          super-verbose output, including SSH details
# -i:             the server to target (keep the trailing comma!)
# --private-key:  the key to use
# -e:             setting the interpreter is needed if `python` is not available
# --check:        dry run
ansible-playbook -vvvv \
    -i "000.000.0.000," \
    --private-key=~/.ssh/id_rsa_ansible \
    -e 'ansible_python_interpreter=/usr/bin/python3' \
    --check \
    deploy.yml

Copy/paste version:

ansible-playbook -vvvv --private-key=/Users/you/.ssh/your_key deploy.yml


Answer 4:

I'm using the following configuration:

# site.yml
- name: Example play
  hosts: all
  remote_user: ansible
  become: yes
  become_method: sudo
  vars:
    ansible_ssh_private_key_file: "/home/ansible/.ssh/id_rsa"


Answer 5:

I had a similar issue and solved it by patching ec2.py and adding some configuration parameters to ec2.ini. The patch takes the value of ec2_key_name, prefixes it with ssh_key_path, appends ssh_key_suffix, and writes the result out as ansible_ssh_private_key_file.

The following variables have to be added to ec2.ini in a new 'ssh' section (optional if the defaults already match your environment):

[ssh]
# Set the path and suffix for the ssh keys
ssh_key_path = ~/.ssh
ssh_key_suffix = .pem

Here is the patch for ec2.py:

204a205,206
>     'ssh_key_path': '~/.ssh',
>     'ssh_key_suffix': '.pem',
422a425,428
>         # SSH key setup
>         self.ssh_key_path = os.path.expanduser(config.get('ssh', 'ssh_key_path'))
>         self.ssh_key_suffix = config.get('ssh', 'ssh_key_suffix')
> 
1490a1497
>         instance_vars["ansible_ssh_private_key_file"] = os.path.join(self.ssh_key_path, instance_vars["ec2_key_name"] + self.ssh_key_suffix)
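As a worked example: with the defaults above, an instance whose keypair is named server1 would end up with the following host variable (the home directory shown is illustrative, since ~/.ssh is expanded to an absolute path):

ansible_ssh_private_key_file = /home/you/.ssh/server1.pem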