Question:
Before posting my issue, I would like to know if it is even possible to achieve what I want.
I have, let's say, myserver.com running a Docker container with nginx & Let's Encrypt. On the same server are 2 more Docker containers running websites.
For now everything is redirected fine, so www.myserver.com goes to container 1 and site2.myserver.com goes to container 2.
I would like to have all communication running over HTTPS, but this is where the trouble starts.
So, my question is: is it possible for the container running nginx and Let's Encrypt to connect to another container using the certificates from Let's Encrypt?
To me it seems to be some kind of man-in-the-middle "attack".
A bit more schematically:
Browse to http://site2.myserver.com -> nginx redirects to https://site2.myserver.com -> connect to container 2 (192.168.0.10) on port 80.
Or another option:
Browse to http://site2.myserver.com -> nginx redirects to https://site2.myserver.com -> connect to container 2 (192.168.0.10) on port 443 having the site2.myserver.com certificates.
If it can't be done, what is the solution then? Copying the certificates to the Docker containers and making them run HTTPS, so that an HTTP request gets redirected to the HTTPS port of that container?
Browse to http://site2.myserver.com -> nginx forwards request -> connect to container 2 (192.168.0.10) on port 443 having the site2.myserver.com certificates.
Thanks,
Greggy
Answer 1:
As I understand it, your nginx reverse proxy is on the same network as the containers, so there is not much need to secure the connection between them with TLS: it is a private network, and an attacker with access to that network would also have access to the server and to all the unencrypted data.
If you absolutely want valid certificates to secure the connections on your local network, you could create additional sub-domains that resolve to the local IPs. You will then need to use the manual DNS option to get your certificate (a certbot option where you manually enter a key as a TXT record for your domain).
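A minimal sketch of that certbot invocation, assuming a hypothetical sub-domain internal.site2.myserver.com that resolves to a local IP (certbot prints the TXT record you have to create for the challenge):
certbot certonly --manual --preferred-challenges dns -d internal.site2.myserver.com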
Example nginx configuration to redirect HTTP to HTTPS:
server {
    listen 80;
    server_name example.com;
    # redirect all plain-HTTP requests to HTTPS
    return 301 https://example.com/;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/fullchain.pem;

    location / {
        # forward the request to the backend container over plain HTTP
        proxy_pass http://container:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    include tls.conf;
}
Answer 2:
I would go with the out-of-the-box solution:
jwilder/nginx-proxy + Let's Encrypt.
First we start the nginx container as a reverse proxy:
docker run -d -p 80:80 -p 443:443 \
    --name nginx-proxy \
    -v /path/to/certs:/etc/nginx/certs:ro \
    -v /etc/nginx/vhost.d \
    -v /usr/share/nginx/html \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy
Next we start the Let's Encrypt companion container:
docker run -d \
    -v /path/to/certs:/etc/nginx/certs:rw \
    --volumes-from nginx-proxy \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    jrcs/letsencrypt-nginx-proxy-companion
For your websites, some environment variables need to be set:
docker run -d \
    --name website1 \
    -e "VIRTUAL_HOST=website1.com" \
    -e "LETSENCRYPT_HOST=website1.com" \
    -e "LETSENCRYPT_EMAIL=webmaster@website1" \
    tutum/apache-php
The nginx container will create a new entry in its config, and the Let's Encrypt container will request a certificate (and handle renewals).
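To check that the proxy picked up a container and that certificates were issued, you can inspect the generated config and the certs directory inside the proxy container (a quick sanity check, assuming the container names used above):
docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf
docker exec nginx-proxy ls /etc/nginx/certs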
More: Nginx+LetsEncrypt
Answer 3:
Here is my way to do that:
NGINX Config file (default.conf)
Using the Docker image from https://github.com/KyleAMathews/docker-nginx, I wrote the custom default config file as follows:
server {
    root /var/www;
    index index.html index.htm;
    server_name localhost MYHOST.COM;

    # Add 1 week expires header for static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires 1w;
    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to redirecting to index.html
        try_files $uri $uri/ @root;
        return 301 https://$host$request_uri;
    }

    # If nginx can't find a file, fallback to the homepage.
    location @root {
        rewrite .* / redirect;
    }

    include /etc/nginx/basic.conf;
}
Dockerfile
Here is my Dockerfile, considering that my static content is under the html/ directory.
# Base image from https://github.com/KyleAMathews/docker-nginx (Docker Hub name assumed)
FROM kyma/docker-nginx

COPY conf/default.conf /etc/nginx/sites-enabled/default
ADD certs/myhost.com.crt /etc/nginx/ssl/server.crt
ADD certs/myhost.com.key /etc/nginx/ssl/server.key
RUN ln -s /etc/nginx/sites-available/default-ssl /etc/nginx/sites-enabled/default-ssl
COPY html/ /var/www
CMD 'nginx'
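To build and run the image, something like the following should work (my-static-site is just an illustrative tag name):
docker build -t my-static-site .
docker run -d -p 80:80 -p 443:443 my-static-site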
Testing
For a local test, map myhost.com to 127.0.0.1 in /etc/hosts, as sketched below.
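A sample /etc/hosts entry (the www name matches the curl call that follows):
127.0.0.1 myhost.com www.myhost.com
Then run the following command: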
curl -I http://www.myhost.com/
Result
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Sun, 04 Mar 2018 04:32:04 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://www.myhost.com/
X-UA-Compatible: IE=Edge
Answer 4:
Good, I could finally get what I wanted by merging the answers of opHASnoNAME and Paul Trehiou. What I did as an extra on top of opHASnoNAME's answer is mount a shared volume between the nginx and the letsencrypt containers. That makes it possible to point the nginx config files at the right certificates (see later).
This is what I did:
docker run --name nginx-prod --restart always -d \
    -p 80:80 -p 443:443 \
    -v /choose/your/dir/letsencrypt:/etc/nginx/certs:ro \
    -v /etc/nginx/vhost.d \
    -v /usr/share/nginx/html \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    -e DEFAULT_HOST=myserver.com \
    jwilder/nginx-proxy

docker run --name letsencrypt --restart always -d \
    -v /choose/your/dir/letsencrypt:/etc/nginx/certs:rw \
    --volumes-from nginx-prod \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    jrcs/letsencrypt-nginx-proxy-companion
Then run whatever webserver containers you like; there is no need to set the LETSENCRYPT variables. My current containers can be reached without setting them.
jwilder/nginx-proxy will list all running containers in /etc/nginx/conf.d/default.conf. Don't add anything to this file, because it will be overwritten on the next restart.
Create a new .conf file for each webserver in the same directory. This file will contain the HTTPS information, as suggested by Paul Trehiou. For example, I created site2.conf:
server {
    listen 443 ssl http2;
    server_name site2.myserver.com;

    ssl_certificate /etc/nginx/certs/live/myserver.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/live/myserver.com/privkey.pem;
    ssl_trusted_certificate /etc/nginx/certs/live/myserver.com/fullchain.pem;

    location / {
        proxy_pass http://192.168.0.10/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
You can take the proxy_pass address from the default.conf file; the IP address of each container is listed there.
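After adding or editing such a .conf file, nginx inside the proxy container needs to reload its configuration. A minimal sketch, assuming the container name nginx-prod from above:
docker exec nginx-prod nginx -s reload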
To be able to back up those .conf files, I will recreate my nginx container and mount a local filesystem on /etc/nginx/conf.d (see the sketch below). It will also make life easier if the container doesn't start because of an error in one of the .conf files.
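A sketch of that recreated proxy container, assuming a hypothetical local directory /choose/your/dir/conf.d for the nginx config (all other options match the command above):
docker run --name nginx-prod --restart always -d \
    -p 80:80 -p 443:443 \
    -v /choose/your/dir/conf.d:/etc/nginx/conf.d \
    -v /choose/your/dir/letsencrypt:/etc/nginx/certs:ro \
    -v /etc/nginx/vhost.d \
    -v /usr/share/nginx/html \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    -e DEFAULT_HOST=myserver.com \
    jwilder/nginx-proxy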
Thanks a lot, everybody, for your input; the puzzle is complete now ;-)