I am developing a service and using Docker Compose to spin up services like Postgres, Redis, and Elasticsearch. I have a web application based on Ruby on Rails that writes to and reads from all of those services.
Here is my docker-compose.yml:
version: '2'
services:
  redis:
    image: redis:2.8
    networks:
      - frontapp
  elasticsearch:
    image: elasticsearch:2.2
    networks:
      - frontapp
  postgres:
    image: postgres:9.5
    environment:
      POSTGRES_USER: elephant
      POSTGRES_PASSWORD: smarty_pants
      POSTGRES_DB: elephant
    volumes:
      - /var/lib/postgresql/data
    networks:
      - frontapp
networks:
  frontapp:
    driver: bridge
And I can ping containers within this network:
$ docker-compose run redis /bin/bash
root@777501e06c03:/data# ping postgres
PING postgres (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: icmp_seq=0 ttl=64 time=0.346 ms
64 bytes from 172.20.0.2: icmp_seq=1 ttl=64 time=0.047 ms
...
So far so good. Now I want to run the Ruby on Rails application on my host machine, but still be able to access the Postgres instance with a URL like postgresql://username:password@postgres/database.
Currently that is not possible:
$ ping postgres
ping: unknown host postgres
I can see my network in docker
$ docker network ls
NETWORK ID NAME DRIVER
ac394b85ce09 bridge bridge
0189d7e86b33 elephant_default bridge
7e00c70bde3b elephant_frontapp bridge
a648554a72fa host host
4ad9f0f41b36 none null
And I can see an interface to it
$ ifconfig
br-0189d7e86b33 Link encap:Ethernet HWaddr 02:42:76:72:bb:c2
inet addr:172.18.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:76ff:fe72:bbc2/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:36 errors:0 dropped:0 overruns:0 frame:0
TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2000 (2.0 KB) TX bytes:8792 (8.7 KB)
br-7e00c70bde3b Link encap:Ethernet HWaddr 02:42:e7:d1:fe:29
inet addr:172.20.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:e7ff:fed1:fe29/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1584 errors:0 dropped:0 overruns:0 frame:0
TX packets:1597 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:407137 (407.1 KB) TX bytes:292299 (292.2 KB)
...
But I am not sure what I should do next. I tried to play a bit with /etc/resolv.conf, mainly with the nameserver directive, but that had no effect.
I would appreciate any help or suggestions on how to configure this setup correctly.
UPDATE
After browsing through Internet resources I managed to assign static IP addresses to the boxes. For now that is enough for me to continue development. Here is my current docker-compose.yml:
version: '2'
services:
  redis:
    image: redis:2.8
    networks:
      frontapp:
        ipv4_address: 172.25.0.11
  elasticsearch:
    image: elasticsearch:2.2
    networks:
      frontapp:
        ipv4_address: 172.25.0.12
  postgres:
    image: postgres:9.5
    environment:
      POSTGRES_USER: elephant
      POSTGRES_PASSWORD: smarty_pants
      POSTGRES_DB: elephant
    volumes:
      - /var/lib/postgresql/data
    networks:
      frontapp:
        ipv4_address: 172.25.0.10
networks:
  frontapp:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.25.0.0/16
          gateway: 172.25.0.1
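With the static addresses in place, the host can reach Postgres directly by IP. A minimal sketch of building the connection URL, using the credentials and the static ipv4_address from the compose file above:

```shell
# Connection details taken from the compose file; 172.25.0.10 is the
# static ipv4_address assigned to the postgres service
PGUSER=elephant
PGPASSWORD=smarty_pants
PGHOST=172.25.0.10
PGDATABASE=elephant
DATABASE_URL="postgresql://${PGUSER}:${PGPASSWORD}@${PGHOST}/${PGDATABASE}"
echo "$DATABASE_URL"
# -> postgresql://elephant:smarty_pants@172.25.0.10/elephant
```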
There is an open-source application that solves this issue; it's called DNS Proxy Server.
It's a DNS server that resolves container hostnames; if it cannot find a matching hostname, it falls back to resolving it from the internet.
Start the DNS server
It will be set automatically as your default DNS (and the original will be restored when it stops).
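A sketch of the start command, based on my memory of the project's README (the image name defreitas/dns-proxy-server and the exact flags should be verified against the repo):

```
# Hypothetical invocation; the server needs the Docker socket to watch
# containers, and resolv.conf to register itself as the default DNS
docker run --rm --hostname dns.mageddo \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc/resolv.conf:/etc/resolv.conf \
  defreitas/dns-proxy-server
```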
To test, create some containers: check the docker-compose file, then start the containers. The container hostnames then resolve both from the host and from another container, and internet hostnames resolve as well.
There are two solutions (besides /etc/hosts) described here and here.
I wrote my own solution in Python and implemented it as a service to provide a mapping from container hostnames to their IPs. Here it is: https://github.com/nicolai-budico/dockerhosts
It launches dnsmasq with the parameter --hostsdir=/var/run/docker-hosts and updates the file /var/run/docker-hosts/hosts each time the list of running containers changes. Once the file /var/run/docker-hosts/hosts changes, dnsmasq automatically updates its mapping and the container becomes available by hostname within a second.
There are install and uninstall scripts. All you need to do is allow your system to interact with this dnsmasq instance. I registered it in systemd-resolved.
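For illustration, a hypothetical resolved.conf pointing the system at a locally listening dnsmasq; the actual listen address depends on how the project's install script configures dnsmasq, so check that before copying this:

```
# /etc/systemd/resolved.conf -- hypothetical sketch, assuming dnsmasq
# listens on 127.0.0.1; apply with: systemctl restart systemd-resolved
[Resolve]
DNS=127.0.0.1
```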
The hostname of a Docker container cannot be seen from outside. What you can do is assign a name to the container and access the container through that name. If you link two containers, say container1 and container2, then Docker takes care of writing the IP and hostname of container2 into container1. In your case, however, your application is running on the host machine.
OR
You know the IP of the container, so in your host machine's /etc/hosts you can add an entry mapping that IP to the container's hostname.
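For example, using the address seen in the ping output from the question (note that this IP can change whenever the container is recreated):

```
# /etc/hosts on the host machine
172.20.0.2  postgres
```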
If you're only using your docker-compose setup locally, you could map the ports from your containers to your host with ports entries in your compose file.
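A sketch of such port mappings, using the services from the question (the host-side ports are just the conventional defaults):

```yaml
# Publish container ports on the host
services:
  elasticsearch:
    image: elasticsearch:2.2
    ports:
      - "9200:9200"   # HTTP API
      - "9300:9300"   # transport protocol
  postgres:
    image: postgres:9.5
    ports:
      - "5432:5432"   # reachable as localhost:5432 from the host
```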
Then use localhost:9300 (or 9200, depending on the protocol) from your web app to access Elasticsearch.
A more complex solution is to run your own DNS server that resolves container names. I think that solution is a lot closer to what you're asking for. I have previously used skydns when running Kubernetes locally.
There are a few options out there. Have a look at https://github.com/gliderlabs/registrator and https://github.com/jderusse/docker-dns-gen. I didn't try them, but you could potentially map the DNS port to your host in the same way as the Elasticsearch ports in the previous example, and then add localhost to your resolv.conf to be able to resolve your container names from your host.
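The idea, sketched as configuration (assuming the DNS container publishes port 53 on the host; resolv.conf is often managed by another tool, so edits may not persist):

```
# /etc/resolv.conf -- hypothetical: try the local DNS container first
nameserver 127.0.0.1
```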
Aditya is correct. In your case the simplest option is to hard-code the hostname/IP mapping in /etc/hosts.
The problem with this approach, however, is that you do not control the private IP address your postgres container will get. The IP address will change every time you start a new container, so you will need to update your /etc/hosts file accordingly.
If that's an issue, I would recommend reading this blog post, which explains how to enforce that a container gets a specific IP address:
https://xand.es/2016/05/09/docker-with-known-ip/