How to set up linkage between Docker containers so they can be replaced without breaking the links

Posted 2019-01-10 01:30

I have a few Docker containers running:

  • Nginx
  • Web app 1
  • Web app 2
  • PostgreSQL

Since Nginx needs to connect to the web application servers inside web app 1 and 2, and the web apps need to talk to PostgreSQL, I have links like this (a sketch of the corresponding docker run commands follows the list):

  • Nginx --- link ---> Web app 1
  • Nginx --- link ---> Web app 2
  • Web app 1 --- link ---> PostgreSQL
  • Web app 2 --- link ---> PostgreSQL
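
Roughly, the containers are started like the sketch below; the image names and link aliases are only placeholders, not my exact commands:

docker run -d --name postgresql postgres
docker run -d --name webapp1 --link postgresql:db mycompany/webapp1
docker run -d --name webapp2 --link postgresql:db mycompany/webapp2
docker run -d --name nginx -p 80:80 --link webapp1:webapp1 --link webapp2:webapp2 nginx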

This works pretty well at the beginning. However, when I develop a new version of web app 1 or web app 2, I need to replace it: I remove the old web app containers, set up new containers, and start them.

At first, the web app containers have IP addresses like

  • 172.17.0.2
  • 172.17.0.3

After I replace them, they have new IP addresses:

  • 172.17.0.5
  • 172.17.0.6

At this moment, the link environment variables exposed in the Nginx container still point to the old IP addresses. Here is the problem: how do I replace a container without breaking the links from other containers? The same issue applies to PostgreSQL: if I want to upgrade the PostgreSQL image, I certainly need to remove the old container and run a new one, but then I have to rebuild the whole container graph, which is not a good idea for real-life server operation.
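
For example, with a link alias of webapp1 and an exposed port 8080 (these names are only for illustration), Docker injects environment variables like the following into the Nginx container at start time, and they are never refreshed afterwards:

WEBAPP1_PORT_8080_TCP_ADDR=172.17.0.2
WEBAPP1_PORT_8080_TCP_PORT=8080
WEBAPP1_PORT_8080_TCP=tcp://172.17.0.2:8080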

11 Answers

干净又极端 · #2 · 2019-01-10 02:18

You can bind the connection ports of your images to fixed ports on the host and configure the services to use them instead.

This has its drawbacks as well, but it might work in your case.
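
A rough sketch of what that could look like; the host ports and image names here are made up for illustration:

docker run -d --name webapp1 -p 8081:8080 mycompany/webapp1
docker run -d --name webapp2 -p 8082:8080 mycompany/webapp2
docker run -d --name postgres -p 5432:5432 postgres
# Nginx and the web apps then point at the docker host's address and these fixed
# ports, so replacing a container changes nothing on the consumer side.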

狗以群分 · #3 · 2019-01-10 02:22

The effect of --link is static, so it will not work for your scenario (there is currently no re-linking, although you can remove links).

We have been using two different approaches at dockerize.it to solve this, without links or ambassadors (although you could add ambassadors too).

1) Use dynamic DNS

The general idea is that you specify a single name for your database (or any other service) and update a short-lived DNS server with the actual IP as you start and stop containers.

We started with SkyDock. It works with two docker containers, the DNS server and a monitor that keeps it updated automatically. Later we moved to something more custom using Consul (also using a dockerized version: docker-consul).

An evolution of this (which we haven't tried) would be to set up etcd or something similar and use its custom API to learn the IPs and ports. The software should support dynamic reconfiguration too.
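
For instance, with a Consul agent running on the host, a simplified (hypothetical) registration and lookup could look like this; the service name, address, and port are placeholders:

# Register the freshly started web app container with the local Consul agent:
curl -X PUT -d '{"Name": "webapp1", "Address": "172.17.0.5", "Port": 8080}' \
    http://127.0.0.1:8500/v1/agent/service/register

# Consumers resolve the service through Consul's DNS interface instead of a
# hard-coded container IP, so re-registering after a redeploy is all it takes:
dig @127.0.0.1 -p 8600 webapp1.service.consul +short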

2) Use the docker bridge IP

When exposing the container ports you can just bind them to the docker0 bridge, which has (or can have) a well known address.

When replacing a container with a new version, just make the new container publish the same port on the same IP.

This is simpler but also more limited. You might have port conflicts if you run similar software (for instance, two containers cannot both listen on port 3306 on the docker0 bridge), and so on, so our current favorite is option 1.
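
A minimal sketch of option 2, assuming the docker0 bridge has the common default address 172.17.42.1 (check ip addr show docker0 on your host) and a made-up image name:

docker run -d --name webapp1 -p 172.17.42.1:8081:8080 mycompany/webapp1
# When deploying a new version, publish the same bridge IP and port again:
docker rm -f webapp1
docker run -d --name webapp1 -p 172.17.42.1:8081:8080 mycompany/webapp1:v2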

趁早两清 · #4 · 2019-01-10 02:22

You can use an ambassador container. But do not link the ambassador container to your client, since this creates the same problem as above. Instead, use the exposed port of the ambassador container on the docker host (typically 172.17.42.1). Example:

postgres volume:

$ docker run --name PGDATA -v /data/pgdata/data:/data -v /data/pgdata/log:/var/log/postgresql phusion/baseimage:0.9.10 true

postgres-container:

$ docker run -d --name postgres --volumes-from PGDATA -e USER=postgres -e PASS='postgres' paintedfox/postgresql

ambassador-container for postgres:

$ docker run -d --name pg_ambassador --link postgres:postgres -p 5432:5432 ctlc/ambassador

Now you can start a postgresql client container without linking the ambassador container and access postgresql on the gateway host (typically 172.17.42.1):

$ docker run --rm -t -i paintedfox/postgresql /bin/bash
root@b94251eac8be:/# PGHOST=$(netstat -nr | grep '^0\.0\.0\.0 ' | awk '{print $2}')
root@b94251eac8be:/# echo $PGHOST
172.17.42.1
root@b94251eac8be:/#
root@b94251eac8be:/# psql -h $PGHOST --user postgres
Password for user postgres: 
psql (9.3.4)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.

postgres=#
postgres=# select 6*7 as answer;
 answer 
--------
     42
(1 row)

postgres=# 

Now you can restart the ambassador container without having to restart the client.

对你真心纯属浪费 · #5 · 2019-01-10 02:23

Another alternative is to use the --net container:$CONTAINER_ID option.

Step 1: Create "network" containers

docker run -d --name db_net ubuntu:14.04 sleep infinity
docker run -d --name app1_net --link db_net:db ubuntu:14.04 sleep infinity
docker run -d --name app2_net --link db_net:db ubuntu:14.04 sleep infinity
docker run -d -p 80 -p 443 --name nginx_net --link app1_net:app1 --link app2_net:app2 ubuntu:14.04 sleep infinity

Step 2: Inject services into "network" containers

docker run --name db --net container:db_net pgsql
docker run --name app1 --net container:app1_net app1
docker run --name app2 --net container:app2_net app2
docker run --name nginx --net container:nginx_net nginx

As long as you do not touch the "network" containers, the IP addresses of your links should not change.
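
For example, a later redeploy of web app 1 simply re-joins the same network container, so the address nginx's link points at stays valid; the :v2 tag below is hypothetical:

docker rm -f app1
docker run --name app1 --net container:app1_net app1:v2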

爷的心禁止访问 · #6 · 2019-01-10 02:24

You could also try the ambassador pattern, using an intermediary container just to keep the link intact (see https://docs.docker.com/articles/ambassador_pattern_linking/ for more info).
