Docker nginx-proxy: proxy between containers


Question:

I am currently running a development stack using Docker-Compose at my company, to provide developers with everything they need to code our applications.

It includes in particular:

  • a Gitlab container (sameersbn/gitlab) to manage private Git repositories,
  • a Jenkins container (library/jenkins) for building and continuous integration,
  • an Archiva container (ninjaben/archiva-docker) to manage Maven repositories.

In order to secure the services with HTTPS and expose them to the outside world, I installed the excellent nginx-proxy container (jwilder/nginx-proxy), which provides automated nginx proxy configuration based on environment variables set on the containers, as well as automated HTTP to HTTPS redirection.

DNS is configured to map each public URL of the dockerized services to the IP of the host.
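As an illustration, the zone could contain records like these (203.0.113.10 is just a placeholder for the public IP of the Docker host):

gitlab.my-domain.com.    IN  A  203.0.113.10
archiva.my-domain.com.   IN  A  203.0.113.10
jenkins.my-domain.com.   IN  A  203.0.113.10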

Finally, using Docker-Compose, my docker-compose.yml file looks like this:

version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
    - "80:80"
    - "443:443"
    volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /var/config/nginx-proxy/certs:/etc/nginx/certs:ro
  postgresql:
    # Configuration of postgresql container ...
  gitlab:
    image: sameersbn/gitlab
    ports:
    - "10022:22"
    volumes:
    - /var/data/gitlab:/home/git/data
    environment:
    # Bunch of environment variables ...
    - VIRTUAL_HOST=gitlab.my-domain.com
    - VIRTUAL_PORT=80
    - CERT_NAME=star.my-domain.com
  archiva:
    image: ninjaben/archiva-docker
    volumes:
    - /var/data/archiva:/var/archiva
    environment:
    - VIRTUAL_HOST=archiva.my-domain.com
    - VIRTUAL_PORT=8080
    - CERT_NAME=star.my-domain.com
  jenkins:
    image: jenkins
    volumes:
    - /var/data/jenkins:/var/jenkins_home
    environment:
    - VIRTUAL_HOST=jenkins.my-domain.com
    - VIRTUAL_PORT=8080
    - CERT_NAME=star.my-domain.com

From a developer workstation, everything works as expected. One can access the different services through https://gitlab.my-domain.com, https://archiva.my-domain.com and https://jenkins.my-domain.com.

The problem occurs when one of the dockerized services accesses another dockerized service. For instance, if I try to access https://archiva.my-domain.com from the Jenkins container, I get a timeout error from the proxy.

It seems that even though archiva.my-domain.com resolves to the public host IP from inside the container, requests coming from dockerized services are not proxied by nginx-proxy.

As far as I understand, nginx-proxy handles requests coming from the host network, but does not handle those coming from the internal container network (_dockerconfig_default_ for a Docker-Compose stack).

You could ask: why would I need to go through the proxy from a container? Of course, I could use the URL http://archiva:8080 from the Jenkins container, and it would work. But this kind of configuration does not scale.
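As a quick illustration of the difference (assuming curl is available in the Jenkins image, which is not guaranteed), one can compare both URLs from inside the Jenkins container:

# Direct container-to-container access over the Compose network: works
docker-compose exec jenkins curl -sI http://archiva:8080

# Same service through the public name handled by nginx-proxy: times out
docker-compose exec jenkins curl -sI --max-time 10 https://archiva.my-domain.com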

For example, when a Gradle build compiles one of our applications, its build.gradle needs to declare my private repository through https://archiva.my-domain.com. The build works when launched from a developer workstation, but not from inside the Jenkins container...

Another example is OAuth authentication in Jenkins against the GitLab service, where the same GitLab authentication URL needs to be reachable both from the outside and from inside the Jenkins container.

My question is then: how can I configure nginx-proxy to proxy a request from one container to another container?

I did not find any topic discussing this problem, and I do not understand it well enough to build a solution on top of the nginx configuration.

Any help would be really appreciated.

Answer 1:

BMitch, the odds were good: it was indeed an iptables rules problem, and not a misconfiguration of nginx-proxy.

The default policy of the INPUT chain in the filter table was DROP, and no rule was in place to ACCEPT requests coming from the container IPs (172.20.X.X).

So, for the record, here are some details about the situation, in case other people face the same problem.

To make containers accessible from the outside world, Docker adds rules to the PREROUTING and FORWARD chains so that traffic from external IPs is DNATed from the host IP to the container IPs. These default rules allow any external IP, which is why limiting access to containers requires some advanced iptables customization.

See this link for an example: http://rudijs.github.io/2015-07/docker-restricting-container-access-with-iptables/
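For reference, the rules Docker installs, and the INPUT policy that was causing the timeouts, can be inspected directly on the host:

# NAT rules added by Docker for published ports
iptables -t nat -L DOCKER -n -v

# FORWARD rules letting traffic reach the containers
iptables -L FORWARD -n -v

# INPUT chain: the default DROP policy here was blocking requests from the containers
iptables -L INPUT -n -v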

But if your containers need to access host resources (services running on the host, or, in my case, an nginx-proxy container listening on the host's HTTP/HTTPS ports and proxying to containers), you need to take care of the iptables rules in the INPUT chain.

In fact, a request coming from a container and addressed to the host is routed to the host network stack by the Docker daemon, but it then has to pass the INPUT chain (since its destination is a host IP). So if you want to protect host resources while still letting containers reach them, do not forget to add something like this:

iptables -A INPUT -s 172.20.X.X/24 -j ACCEPT

Where 172.20.X.X/24 is the virtual network on which your containers are running.
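If you prefer not to open the whole INPUT chain to the container subnet, a narrower variant is possible. The following is a sketch, assuming the Compose network is 172.20.0.0/16 (check the actual subnet with docker network inspect) and that only the proxy ports need to be reachable; the persistence step assumes a Debian/Ubuntu host with the iptables-persistent package:

# Accept only HTTP/HTTPS coming from the containers' subnet
iptables -I INPUT -s 172.20.0.0/16 -p tcp -m multiport --dports 80,443 -j ACCEPT

# Persist the rules across reboots (iptables-persistent)
iptables-save > /etc/iptables/rules.v4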

Thanks a lot for your help.