Question:
I'm deploying on docker 1.9
I run an nginx container first:
docker run \
--name=nginx \
--link=php1:php1 \
--link=php2:php2 \
--restart=always \
-p 80:80 -p 443:443 \
-v /var/docker/nginx/conf.d:/etc/nginx/conf.d \
-d nginx:latest
Note that I linked two existing php containers: php1 and php2.
Now I have started a new container, php3.
Can I add a link from nginx to php3 with a single bash script on the host machine?
Why do I want to do this?
Because I have many different php apps to deploy, and for each one I will create a new php:fpm docker container to run the code.
And they share the same nginx service.
I want to make a dynamic deploy bash script for the php apps, so that deployment takes a single command.
But I am blocked because nginx doesn't know the hostname of a newly created php container.
Adding links dynamically was part of issue 3155.
Pre-libnetwork (Docker 1.9, and until 1.10 lands), you could use jwilder/nginx-proxy to generate the right nginx.conf
and restart the nginx service each time a new container needs to be linked.
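For reference, a minimal sketch of that pre-1.10 approach (the VIRTUAL_HOST value and image name are placeholders; nginx-proxy targets HTTP backends, so for php-fpm/FastCGI you would pair the underlying docker-gen tool with a custom template):
$ # nginx-proxy watches the Docker socket and regenerates its own nginx.conf
$ docker run -d -p 80:80 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy
$ # each app container only declares which virtual host it serves
$ docker run -d -e VIRTUAL_HOST=app1.example.com my-php-app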
But with libnetwork (and its 0.6 release in early February), docker 1.10 comes with new Docker network features.
That means you will be able to attach your containers to a user-defined network, making them automatically visible to each other by name!
$ docker run -itd --name=container2 busybox
498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152
Then create an isolated, bridge network to test with.
$ docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw
06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8
$ docker network connect isolated_nw container2
$ docker run --net=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox
The selected IP address is part of the container networking configuration and will be preserved across container reload.
The feature is only available on user-defined networks, because they guarantee that their subnet configuration does not change across daemon reloads.
On isolated_nw, which is user-defined, the Docker embedded DNS server enables name resolution for the other containers in the network. Inside container2 it is possible to ping container3 by name.
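You can check that from the host with the containers above (the ping flags are just busybox defaults):
$ docker exec container2 ping -c 2 container3
$ # container3 resolves to 172.25.3.3 through the embedded DNS server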
What this means is that, with docker 1.10, containers attached to a user-defined network can see each other by name.
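Applied to the question, a minimal sketch could be (web_nw is an arbitrary network name, not something Docker requires):
$ docker network create -d bridge web_nw
$ # attach the already-running nginx container to it
$ docker network connect web_nw nginx
$ # every new php app simply joins the same network; no --link, no nginx relaunch
$ docker run -d --name=php3 --net=web_nw php:fpm
$ # inside /etc/nginx/conf.d you can then reference the backend by name,
$ # e.g. fastcgi_pass php3:9000;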
But there is more: Linking containers in user-defined networks
$ docker run --net=isolated_nw -itd --name=container4 --link container5:c5 busybox
01b5df970834b77a9eadbaff39051f237957bd35c4c56f11193e0594cfd5117c
With the help of --link, container4 will be able to reach container5 using the aliased name c5 as well.
Please note that while creating container4, we linked to a container named container5 which has not been created yet.
That is one of the differences in behavior between the legacy link in the default bridge network and the new link functionality in user-defined networks.
- The legacy link is static in nature: it hard-binds the container with the alias and it doesn't tolerate linked container restarts.
- The new link functionality in user-defined networks is dynamic in nature and supports linked container restarts, including tolerating IP-address changes on the linked container.
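A short illustration of that dynamic behavior, continuing the example above (remember that container5 did not exist when container4 declared the link):
$ docker run --net=isolated_nw -itd --name=container5 busybox
$ docker exec container4 ping -c 2 c5
$ # restarting container5 may change its IP, but the alias keeps resolving
$ docker restart container5
$ docker exec container4 ping -c 2 c5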
With the new docker 1.10 you will have two choices:
either have a fixed NGiNX config making a reverse proxy to a fixed list of containers c1, c2, c3... which are potentially not created yet.
Each time a new container is created, you relaunch your NGiNX container with the appropriate link: --link myNewContainer2:c2
Or your NGiNX container's main process actually monitors the user-defined network and, for each new container detected, regenerates the NGiNX conf and gracefully restarts the NGiNX daemon.
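Putting that together for the one-command deployment asked about, a host-side sketch could look like the following. It assumes the nginx container has already been connected to a user-defined network named web_nw as above; the conf path comes from the question, while the script name and the server_name pattern are placeholders:
#!/bin/bash
# deploy_php.sh APP_NAME - hypothetical one-command deployment sketch
set -e
APP=$1

# 1. start the new php-fpm container on the shared user-defined network
docker run -d --name="$APP" --net=web_nw php:fpm

# 2. generate an nginx server block that reaches the container by name
#    (a real app would also need root/SCRIPT_FILENAME settings)
cat > /var/docker/nginx/conf.d/"$APP".conf <<EOF
server {
    listen 80;
    server_name $APP.example.com;
    location / {
        fastcgi_pass $APP:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
EOF

# 3. gracefully reload nginx inside its container
docker exec nginx nginx -s reload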