Suppose you have two services in your topology:
- API
- Web Interface
Both are supposed to be running on port 80.
In Docker Swarm, when you create a service and want to access it from outside the cluster, you need to expose and map a port from the service to the nodes (external ports). But if you map port 80 to, say, the API service, then you can't map the same port to the Web Interface service, since it is already taken.
How can this be solved?
As far as I can see, this use case is not supported. So even if you want to have a big swarm cluster and throw all your services and applications in there, it won't be possible because of this behavior.
Am I missing something?
Is there any pattern to solve this?
Use different ports if they need to be publicly exposed:
docker service create -p 80:80 --name web nginx
and then
docker service create -p 8080:80 --name api myapi
In the second example, public port 8080 maps to container port 80. Of course, if they don't need to be publicly exposed, containers on the same network can reach each other's services by using the container name and container port.
curl http://api:80
would find a container named api and connect to port 80, using the built-in DNS discovery for containers on the same network.
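Note that this DNS-based discovery only works when both services share a user-defined network. A minimal sketch of that setup (the network name `appnet` and the image name `myapi` are assumptions, not part of the answers above):

```shell
# Create a user-defined overlay network for the swarm services
# (name "appnet" is illustrative)
docker network create -d overlay appnet

# Attach both services to it; no ports are published to the outside
docker service create --network appnet --name web nginx
docker service create --network appnet --name api myapi

# From inside any container on appnet, "api" resolves via Docker's DNS:
#   curl http://api:80
```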
You can look into Docker Flow: Proxy to use as an easy-to-configure reverse proxy.
BUT, I believe, as other commenters have pointed out, the Docker 1.12 swarm mode has a fundamental problem with multiple services exposing the same port (like 80 or 8080). It boils down (I THINK) to the mesh-routing magic, which is a layer 4 thing, meaning basically TCP/IP: in other words, IP address + port. So things get messy when multiple services are listening on (for example) port 8080. The mesh router will happily deliver traffic destined for port 8080 to any service that exposes that port.
You CAN isolate things from each other using overlay networking in swarm mode, BUT the problem comes in when you have to connect services to the proxy (overlay) network: at that point things seem to get mixed up (and this is where I am currently having difficulties).
The solution I have at this point is to let the services that need to be exposed to the net use ports that are unique as far as the proxy-facing (overlay) network is concerned (they do NOT have to be published to the swarm!), and then use something like the Docker Flow Proxy to handle incoming traffic on the desired port.
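A rough sketch of that layout, assuming a proxy-facing overlay network called `proxy` and an illustrative image name `myapi` (the Docker Flow Proxy in practice needs additional setup, such as a service registry, which is omitted here):

```shell
# Services join the proxy-facing overlay network but publish nothing
# to the swarm; each listens on a port that is unique on that network.
docker network create -d overlay proxy

docker service create --network proxy --name web nginx   # listens on 80 internally
docker service create --network proxy --name api myapi   # e.g. listens on 8080 internally

# Only the reverse proxy publishes a port on the swarm:
docker service create --network proxy -p 80:80 --name reverse-proxy \
  vfarcic/docker-flow-proxy
```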
A quick sample to get you started (roughly based on this):
You then configure the reverseProxy as per its documentation.
NOTE: I see there is now a new AUTO configuration available; I have not yet tried it.
End result if everything worked: you can access services via `service domain` or `service path` (I had issues with `service path`).

[EDIT 2016/10/20]
Ignore all the stuff above about issues with the same exposed port on the same overlay network attached to the proxy.
I tore down my whole setup and started again; everything is working as expected now: I can access multiple (different) services on port 80, using different domains, via the Docker Flow Proxy.
I'm also using the auto-configuration mentioned above: everything is working like a charm.
If you need to expose both the API and the Web interface to the public, you have two options. Either use different ports for the services, or use a proxy that listens on port 80 and forwards requests to the correct service according to the path:
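A minimal sketch of such a path-based proxy using nginx (the upstream names assume services called `web` and `api` reachable on the same network as the proxy; adjust to your setup):

```nginx
server {
    listen 80;

    # Requests under /api/ go to the api service; note the trailing
    # slash on proxy_pass strips the /api/ prefix before forwarding.
    location /api/ {
        proxy_pass http://api:80/;
    }

    # Everything else goes to the web interface.
    location / {
        proxy_pass http://web:80/;
    }
}
```

With this in place only the proxy publishes port 80 to the swarm, and both services stay on their internal ports.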