I have a few Docker containers running, like:
- Nginx
- Web app 1
- Web app 2
- PostgreSQL
Since Nginx needs to connect to the web application servers inside web app 1 and 2, and the web apps need to talk to PostgreSQL, I have links like this:
- Nginx --- link ---> Web app 1
- Nginx --- link ---> Web app 2
- Web app 1 --- link ---> PostgreSQL
- Web app 2 --- link ---> PostgreSQL
This works pretty well at the beginning. However, when I develop a new version of web app 1 and web app 2, I need to replace them. What I do is remove the old web app containers, set up new containers and start them.
For the web app containers, their IP addresses at first would be something like:
- 172.17.0.2
- 172.17.0.3
And after I replace them, they have new IP addresses:
- 172.17.0.5
- 172.17.0.6
At this moment, the exposed environment variables in the Nginx container still point to the old IP addresses. Here comes the problem: how do you replace a container without breaking the links to other containers? The same issue also happens with PostgreSQL. If I want to upgrade the PostgreSQL image version, I certainly need to remove the old container and run the new one, but then I need to rebuild the whole container graph, which is not a good idea for real-life server operation.
Links are to a specific container, not based on the name of a container. So the moment you remove a container, the link is disconnected, and a new container (even with the same name) will not automatically take its place.
The new networking feature allows you to connect to containers by name, so if you create a new network, any container connected to that network can reach other containers by their name. Example:
1) Create a new network
2) Connect containers to the network (either at run time, or by connecting an already-running container)
3) Ping a container by name
See this section of the documentation.
Note: Unlike legacy links, the new networking will not create environment variables, nor share environment variables with other containers. This feature currently doesn't support aliases.
With the OpenSVC approach, you can work around this: each time you replace a container, you can be sure that it will connect to the correct IP address.
Tutorial here => Docker Multi Containers with OpenSVC
Don't miss the "complex orchestration" part at the end of the tutorial, which can help you start/stop containers in the correct order (1 PostgreSQL subset + 1 web app subset + 1 Nginx subset).
The main drawback is that you expose the web app and PostgreSQL ports on a public address, while actually only the Nginx TCP port needs to be exposed publicly.
If anyone is still curious, you have to use the host entries in the /etc/hosts file of each Docker container, and should not depend on environment variables, as they are not updated automatically.
There will be a hosts-file entry for each of the linked containers, in the format LINKEDCONTAINERNAME_PORT_PORTNUMBER_TCP, etc.
The following is from the Docker docs.
You may use docker links with names to solve this.
The most basic setup would be to first create a named database container:
Then create a web container connecting to db:
With this, you don't need to manually connect containers by their IP addresses.
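A minimal sketch of those two steps (the answer's original commands were lost; `mywebapp` is a hypothetical image name, the `postgres` image is an example):

```shell
# Named database container, using the official postgres image as an example
docker run -d --name db postgres

# Web container linked to "db"; inside it, the hostname "db" resolves
# to the database container ("mywebapp" is a placeholder image)
docker run -d --name web --link db:db mywebapp
```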
A network-scoped alias is what you need in this case. It's a rather new feature, which can be used to "publish" a container providing a service to the whole network, unlike link aliases, which are accessible only from one container.
It does not add any kind of dependency between containers: they can communicate as long as both are running, regardless of restarts, replacement, and launch order. It uses DNS internally, I believe, instead of /etc/hosts.
Use it like this:
docker run --net=some_user_defined_nw --net-alias postgres ...
and you can connect to it using that alias from any container on the same network. It does not work on the default network, unfortunately; you have to create one with
docker network create <network>
and then use it with --net=<network> for every container (Compose supports it as well).
for every container (compose supports it as well).In addition to container being down and hence unreachable by alias multiple containers can also share an alias in which case it's not guaranteed that it will be resolved to the right one. But in some case that can help with seamless upgrade, probably.
It's all not very well documented yet, and hard to figure out just by reading the man page.
This was included in the experimental build of Docker 3 weeks ago, with the introduction of services: https://github.com/docker/docker/blob/master/experimental/networking.md
You should be able to get a dynamic link in place by running a docker container with the --publish-service <name> argument. This name will be accessible via DNS. This is persistent across container restarts (as long as you restart the container with the same service name, of course).