I am trying to create an Ansible playbook that would set up a local Postgres database to be used for local/dev testing of a certain app. postgresql_db seems to be the Ansible module I need, and psycopg2 is the Python module listed as a dependency. So in the same virtualenv where I have installed Ansible, I also installed psycopg2 (I'm running on a Mac with pipenv).
But when I run my playbook with this command:
ansible-playbook pg_local.yml --connection=local
I get the error:
"msg": "the python psycopg2 module is required"
The playbook is tiny:
---
- hosts: my_ws
  tasks:
    - name: create a db
      postgresql_db:
        name: mynewdb
and so is /etc/ansible/hosts:
[my_ws]
localhost
I suspect that somehow the "remote" machine, which is really local, is trying to import psycopg2 in a Python environment which doesn't have the module. Is the --connection=local to blame?

I added it to solve the "ssh: connect to host localhost port 22: Connection refused" error, and since I intend to run this only locally, I don't think it's wrong - but I do wonder if it messes up the environment for Ansible.
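For reference, the same effect as the --connection=local flag can also be achieved by pinning the connection type on the host entry itself, for example:

[my_ws]
localhost ansible_connection=local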
I have added a 'test and report' task to the playbook and no problems were detected:
changed: [localhost] => {
    "changed": true,
    "cmd": [
        "python",
        "-c",
        "import psycopg2; print(psycopg2.__version__)"
    ],
    "delta": "0:00:00.087664",
    "end": "2018-08-22 13:36:17.046624",
    "invocation": {
        "module_args": {
            "_raw_params": "python -c 'import psycopg2; print(psycopg2.__version__)'",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "rc": 0,
    "start": "2018-08-22 13:36:16.958960",
    "stderr": "/Users/lsh783/.local/share/virtualenvs/docker-ansible-setup--HsJmUMv/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use \"pip install psycopg2-binary\" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.\n \"\"\")",
    "stderr_lines": [
        "/Users/lsh783/.local/share/virtualenvs/docker-ansible-setup--HsJmUMv/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use \"pip install psycopg2-binary\" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.",
        " \"\"\")"
    ],
    "stdout": "2.7.5 (dt dec pq3 ext lo64)",
    "stdout_lines": [
        "2.7.5 (dt dec pq3 ext lo64)"
    ]
}
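For reference, the task that produced the output above was roughly the following (a sketch reconstructed from the module_args shown; the task and variable names here are mine):

- name: test and report psycopg2
  command: python -c 'import psycopg2; print(psycopg2.__version__)'
  register: psycopg2_check

- name: report
  debug:
    var: psycopg2_check.stdout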
I do see this line in the output with -vvv:

<localhost> EXEC /bin/sh -c '/usr/bin/python /Users/lsh783/.ansible/tmp/ansible-tmp-1534957231.272713-121756549613883/postgresql_db.py && sleep 0'

and it bothers me that this is not the Python from the virtual environment I'm working in.
This is caused by a well-known and documented behaviour of Ansible.

In short, if you specify localhost anywhere in your inventory, Ansible will default to using /usr/bin/python for running the modules, regardless of the connection: local setting. This in turn causes problems when additional libraries are installed in the Python environment used to execute the playbook, but not for /usr/bin/python.
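A quick way to see which interpreter the modules actually run under is to print the Python facts gathered by the setup module; this is only a diagnostic sketch:

- hosts: my_ws
  gather_facts: yes
  tasks:
    - name: show the interpreter Ansible modules run under
      debug:
        var: ansible_python.executable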
The solution is to specify ansible_python_interpreter for localhost. In your case, point it at the Python of the virtualenv in which psycopg2 is installed; a sketch of such an inventory entry follows below. Because of the above, the test to verify module presence should also invoke that interpreter rather than whatever python is first on the PATH; a corrected test task is sketched below as well.
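A minimal sketch of the inventory change, assuming the virtualenv path visible in the warning above (adjust it to wherever your virtualenv's bin/python actually lives):

[my_ws]
localhost ansible_python_interpreter=/Users/lsh783/.local/share/virtualenvs/docker-ansible-setup--HsJmUMv/bin/python

The variable can equally be set at the play or group level instead of on the host line.

And a sketch of the module-presence test, reusing the interpreter variable so it exercises exactly the Python that Ansible modules will run under:

- name: verify psycopg2 is importable by the module interpreter
  command: "{{ ansible_python_interpreter }} -c 'import psycopg2; print(psycopg2.__version__)'"
  register: psycopg2_check

- debug:
    var: psycopg2_check.stdout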