Update II
It's now July 16th, 2015 and things have changed again. I've discovered this automagical container from Jason Wilder:
https://github.com/jwilder/nginx-proxy
and it solves this problem in about as long as it takes to `docker run` the container. This is now the solution I'm using to solve this problem.
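For reference, usage boils down to running the proxy against the Docker socket and giving each application container a `VIRTUAL_HOST` env var (per the project's README; the app image name below is a placeholder):

```shell
# run the proxy; it watches the Docker socket for containers starting/stopping
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# any container started with a VIRTUAL_HOST env var gets proxied automatically
docker run -d -e VIRTUAL_HOST=api.myapp.com myusername/api
```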
Update
It's now July of 2015 and things have changed drastically with regard to networking Docker containers. There are now many different offerings that solve this problem (in a variety of ways).
You should use this post to gain a basic understanding of the `docker --link` approach to service discovery, which is about as basic as it gets, works very well, and actually requires less fancy-dancing than most of the other solutions. It is limited in that it's quite difficult to network containers on separate hosts in any given cluster, and containers cannot be restarted once networked, but it does offer a quick and relatively easy way to network containers on the same host. It's a good way to get an idea of what the software you'll likely be using to solve this problem is actually doing under the hood.

Additionally, you'll probably also want to check out Docker's nascent `network`, Hashicorp's `consul`, Weaveworks' `weave`, Jeff Lindsay's `progrium/consul` & `gliderlabs/registrator`, and Google's `Kubernetes`.

There are also the CoreOS offerings that utilize `etcd`, `fleet`, and `flannel`.

And if you really want to have a party you can spin up a cluster to run `Mesosphere`, or `Deis`, or `Flynn`.

If you're new to networking (like me) then you should get out your reading glasses, pop "Paint The Sky With Stars — The Best of Enya" on the Wi-Hi-Fi, and crack a beer — it's going to be a while before you really understand exactly what it is you're trying to do. Hint: you're trying to implement a `Service Discovery Layer` in your `Cluster Control Plane`. It's a very nice way to spend a Saturday night.

It's a lot of fun, but I wish I'd taken the time to educate myself better about networking in general before diving right in. I eventually found a couple of posts from the benevolent Digital Ocean Tutorial gods: Introduction to Networking Terminology and Understanding ... Networking. I suggest reading those a few times first before diving in.

Have fun!
Original Post
I can't seem to grasp port mapping for Docker
containers. Specifically how to pass requests from Nginx to another container, listening on another port, on the same server.
I've got a Dockerfile for an Nginx container like so:
FROM ubuntu:14.04
MAINTAINER Me <me@myapp.com>
RUN apt-get update && apt-get install -y htop git nginx
ADD sites-enabled/api.myapp.com /etc/nginx/sites-enabled/api.myapp.com
ADD sites-enabled/app.myapp.com /etc/nginx/sites-enabled/app.myapp.com
ADD nginx.conf /etc/nginx/nginx.conf
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80 443
CMD ["service", "nginx", "start"]
And then the api.myapp.com
config file looks like so:
upstream api_upstream{
server 0.0.0.0:3333;
}
server {
listen 80;
server_name api.myapp.com;
return 301 https://api.myapp.com/$request_uri;
}
server {
listen 443;
server_name api.myapp.com;
location / {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_pass http://api_upstream;
}
}
And then another for app.myapp.com
as well.
And then I run:
sudo docker run -p 80:80 -p 443:443 -d --name Nginx myusername/nginx
And it all stands up just fine, but the requests are not getting passed through to the other containers/ports. When I SSH into the Nginx container and inspect the logs, I see no errors.
Any help?
@T0xicCode's answer is correct, but I thought I would expand on the details since it actually took me about 20 hours to finally get a working solution implemented.
If you're looking to run Nginx in its own container and use it as a reverse proxy to load balance multiple applications on the same server instance then the steps you need to follow are as such:
Link Your Containers
When you `docker run` your containers, typically by inputting a shell script into `User Data`, you can declare links to any other running containers. This means that you need to start your containers up in order; only the later containers can link to the earlier ones.

So in this example, the `API` container isn't linked to any others, but the `App` container is linked to `API`, and `Nginx` is linked to both `API` and `App`.

The result of this is changes to the env vars and the `/etc/hosts` files that reside within the `API` and `App` containers. The results look like so:

/etc/hosts
Running `cat /etc/hosts` within your `Nginx` container will produce an entry for each linked container, mapping its link alias to its local IP address.

ENV Vars
Running `env` within your `Nginx` container will produce, among others, variables of the form `<ALIAS>_PORT_<port>_TCP_ADDR` for each linked container. I've truncated many of the actual vars, but these are the key values you need to proxy traffic to your containers.
To obtain a shell to run the above commands within a running container, use the following:
sudo docker exec -i -t Nginx bash
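As a sketch of the commands this section describes (image names, exposed ports, and IP addresses are illustrative assumptions):

```shell
# start containers in order; later containers link to earlier ones
docker run -d --name API myusername/api                    # exposes 3333
docker run -d --name App --link API:API myusername/app     # exposes 3000
docker run -d -p 80:80 -p 443:443 --name Nginx \
    --link API:API --link App:App myusername/nginx

# inside the Nginx container, `cat /etc/hosts` then shows lines like:
#   172.0.0.2  API
#   172.0.0.3  App
# and `env` shows vars like:
#   API_PORT_3333_TCP_ADDR=172.0.0.2
#   APP_PORT_3000_TCP_ADDR=172.0.0.3
```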
You can see that you now have both `/etc/hosts` file entries and env vars that contain the local IP address for any of the containers that were linked. So far as I can tell, this is all that happens when you run containers with link options declared. But you can now use this information to configure `nginx` within your `Nginx` container.

Configuring Nginx
This is where it gets a little tricky, and there are a couple of options. You can choose to configure your sites to point to an entry in the `/etc/hosts` file that `docker` created, or you can utilize the `ENV` vars and run a string replacement (I used `sed`) on your `nginx.conf` and any other conf files that may be in your `/etc/nginx/sites-enabled` folder to insert the IP values.

OPTION A: Configure Nginx Using ENV Vars
The key difference between this option and the `/etc/hosts` file option is how you write your `Dockerfile` to use a shell script as the `CMD` argument, which in turn handles the string replacement to copy the IP values from `ENV` to your conf file(s).

Here's the set of configuration files I ended up with:
Dockerfile
nginx.conf
api.myapp.conf
Nginx-Startup.sh
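The elided `Nginx-Startup.sh` presumably looked something like this sketch; the env var names assume link aliases `API` (port 3333) and `App` (port 3000), and the file paths match the Dockerfile earlier in this thread:

```shell
#!/bin/bash
# Swap the APP_IP placeholder in each site config for the IP that docker's
# legacy links exported: linking alias API exposing port 3333 yields an env
# var named API_PORT_3333_TCP_ADDR inside this container.
sed -i "s/APP_IP/${API_PORT_3333_TCP_ADDR}/g" /etc/nginx/sites-enabled/api.myapp.com
sed -i "s/APP_IP/${APP_PORT_3000_TCP_ADDR}/g" /etc/nginx/sites-enabled/app.myapp.com
# keep nginx in the foreground so the container stays alive
service nginx start
```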
I'll leave it up to you to do your homework about most of the contents of `nginx.conf` and `api.myapp.conf`.

The magic happens in `Nginx-Startup.sh`, where we use `sed` to do string replacement on the `APP_IP` placeholder that we've written into the `upstream` block of our `api.myapp.conf` and `app.myapp.conf` files.

This ask.ubuntu.com question explains it very nicely: Find and replace text within a file using commands
So docker has launched our container and triggered the `Nginx-Startup.sh` script to run, which has used `sed` to change the value `APP_IP` to the corresponding `ENV` variable we provided in the `sed` command. We now have conf files within our `/etc/nginx/sites-enabled` directory that have the IP addresses from the `ENV` vars that docker set when starting up the container. Within your `api.myapp.conf` file you'll see the `upstream` block has changed accordingly. The IP address you see may be different, but I've noticed that it's usually `172.0.0.x`.

You should now have everything routing appropriately.
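The rewritten `upstream` block would look something like this (the IP address is illustrative):

```nginx
upstream api_upstream {
    server 172.0.0.2:3333;
}
```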
OPTION B: Use /etc/hosts File Entries

This should be the quicker, easier way of doing this, but I couldn't get it to work. Ostensibly you just input the value of the `/etc/hosts` entry into your `api.myapp.conf` and `app.myapp.conf` files, but I couldn't get this method to work.

Here's the attempt that I made in `api.myapp.conf`:
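The attempt presumably looked something like this (a reconstructed sketch, pointing the upstream at the hostname from `/etc/hosts`):

```nginx
# use the /etc/hosts hostname that docker wrote for the linked container
upstream api_upstream {
    server API:3333;
}
```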
Considering that there's an entry in my `/etc/hosts` file like so:

172.0.0.2 API

I figured it would just pull the value in, but it doesn't seem to work. I also had a couple of ancillary issues with my
Elastic Load Balancer
sourcing from all AZs, so that may have been the issue when I tried this route. Instead I had to learn how to handle replacing strings in Linux, so that was fun. I'll give this a try in a while and see how it goes.

AJB's "Option B" can be made to work by using the base Ubuntu image and setting up nginx on your own. (It didn't work when I used the Nginx image from Docker Hub.)
Here is the Docker file I used:
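The file itself is elided here; a hedged reconstruction, installing nginx by hand on the base Ubuntu image as this answer describes (the `conf/mysite.com` path matches the config mentioned below):

```dockerfile
# base Ubuntu image with nginx installed by hand, since the stock
# nginx image from Docker Hub reportedly didn't work for this setup
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
ADD conf/mysite.com /etc/nginx/sites-enabled/mysite.com
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD ["service", "nginx", "start"]
```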
My nginx config (aka: conf/mysite.com):
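The config is elided; a sketch of what it plausibly contained, proxying to the linked container by name (`app` is an assumed link alias, and port 3000 matches the answer's description):

```nginx
# "app" resolves via the /etc/hosts entry docker writes for the linked container
upstream docker-app {
    server app:3000;
}

server {
    listen 80;

    location / {
        proxy_pass http://docker-app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```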
And finally, how I start my containers:
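The commands are elided; they would have looked something like this (image and container names are assumptions, with the app container listening on 3000):

```shell
# start the app first, then link the proxy to it by name
docker run -d --name app my-app-image
docker run -d --name nginx-proxy -p 80:80 --link app:app my-nginx-image
```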
This got me up and running so my nginx pointed the upstream to the second docker container which exposed port 3000.
Using docker links, you can link the upstream container to the nginx container. An added feature is that docker manages the hosts file, which means you'll be able to refer to the linked container using a name rather than a potentially random IP.
I tried using the popular Jason Wilder reverse proxy that code-magically works for everyone, and learned that it doesn't work for everyone (i.e., me). And I'm brand new to NGINX, and I didn't like that I didn't understand the technologies I was trying to use.
Wanted to add my 2 cents, because the discussion above around `linking` containers together is now dated: linking is a deprecated feature. So here's an explanation of how to do it using `networks`. This answer is a full example of setting up nginx as a reverse proxy to a statically-paged website using `Docker Compose` and nginx configuration.

TL;DR;
Add the services that need to talk to each other onto a predefined network. For a step-by-step discussion on Docker networks, I learned some things here: https://technologyconversations.com/2016/04/25/docker-networking-and-dns-the-good-the-bad-and-the-ugly/
Define the Network
First of all, we need a network on which all your backend services can talk. I called mine `web`, but it can be whatever you want.

Build the App
We'll just do a simple website app: an index.html page served by an nginx container. The content is a volume mounted to the host under a folder `content`.

Dockerfile:
default.conf
docker-compose.yml
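The compose file is elided here; a hedged sketch of what it plausibly contained (service and network names are assumptions, matching the names used in this answer):

```yaml
# the external network must exist first: docker network create web
version: "2"
services:
  sample-site:
    build: .
    container_name: sample-site
    volumes:
      - ./content:/usr/share/nginx/html
    expose:
      - "80"
    networks:
      - web
networks:
  web:
    external: true
```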
Note that we no longer need port mapping here. We simply expose port 80. This is handy for avoiding port collisions.
Run the App
Fire this website up with `docker-compose up -d`.
Some fun checks regarding the dns mappings for your container:
This ping should work, inside your container.
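Assuming the app's container is named `sample-site` (an assumed name), the checks might look like:

```shell
# open a shell inside the running container
docker exec -it sample-site bash
# inside the container: Docker's embedded DNS resolves container names
# on a user-defined network, so this should succeed
ping sample-site
```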
Build the Proxy
Nginx Reverse Proxy:
Dockerfile
We reset all the virtual host config, since we're going to customize it.
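A sketch of the proxy's Dockerfile under that description:

```dockerfile
FROM nginx
# reset the default virtual-host config, since we're going to customize it
RUN rm /etc/nginx/conf.d/default.conf
```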
docker-compose.yml
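A hedged sketch of the proxy's compose file (names are assumptions; the `conf.d` mount matches the virtual-host setup later in this answer):

```yaml
version: "2"
services:
  nginx-proxy:
    build: .
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - ./conf.d:/etc/nginx/conf.d
    networks:
      - web
networks:
  web:
    external: true
```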
Run the Proxy
Fire up the proxy using our trusty `docker-compose up -d`.
Assuming no issues, then you have two containers running that can talk to each other using their names. Let's test it.
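Assuming the container names `nginx-proxy` and `sample-site` (both illustrative), a quick check might be:

```shell
# name resolution across the shared network, from inside the proxy
docker exec -it nginx-proxy ping sample-site
```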
Set up Virtual Host
Last detail is to set up the virtual hosting file so the proxy can direct traffic based on however you want to set up your matching:
sample-site.conf for our virtual hosting config:
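A sketch of that file (the `server_name` is illustrative, and `sample-site` is the assumed container name that Docker's DNS resolves on the shared network):

```nginx
server {
    listen 80;
    server_name sample-site.dev;

    location / {
        # proxy by container name rather than IP
        proxy_pass http://sample-site;
    }
}
```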
Based on how the proxy was set up, you'll need this file stored under your local `conf.d` folder, which we mounted via the `volumes` declaration in the `docker-compose` file.

Last but not least, tell nginx to reload its config.
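Assuming the proxy container is named `nginx-proxy` (illustrative), the reload can be done from the host:

```shell
docker exec nginx-proxy service nginx reload
```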
This sequence of steps is the culmination of hours of pounding headaches as I struggled with the ever-painful 502 Bad Gateway error and learned nginx for the first time, since most of my experience was with Apache.
This answer is to demonstrate how to kill the 502 Bad Gateway error that results from containers not being able to talk to one another.
I hope this answer saves someone out there hours of pain, since getting containers to talk to each other was really hard to figure out for some reason, despite it being what I expected to be an obvious use-case. But then again, me dumb. And please let me know how I can improve this approach.
@gdbj's answer is a great explanation and the most up-to-date answer. Here, however, is a simpler approach.
So if you want to redirect all traffic from nginx listening on `80` to another container exposing `8080`, the minimum configuration can be as little as:

nginx.conf:
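A minimal sketch of that config; `web` is the assumed compose service name of the backend, resolvable via Compose's default network:

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://web:8080;
    }
}
```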
docker-compose.yml
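And a hedged sketch of the matching compose file (the app image name is a placeholder and must listen on 8080):

```yaml
version: "3"
services:
  web:
    image: my-app-image
    expose:
      - "8080"
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
```

Services on the same compose file share a default network, so nginx can reach `web` by name without any links.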
Docker docs
Just found an article from Anand Mani Sankar which shows a simple way of using an nginx upstream proxy with Docker Compose.

Basically one must configure the instance linking and ports in the docker-compose file, and update the upstream in nginx.conf accordingly.