On my Mac I use Vagrant with Ubuntu and Apache running on it, and I have Apache virtual host entries for my various code repositories. On the macOS side of things I create /etc/hosts entries for each of those v-host entries.
I'm trying to achieve the same effect with Docker, but I'm struggling to figure out how to do it without having to specify the port number when accessing the app, which I don't want to do. For example, I have 127.0.0.1 dockertest.com in my /etc/hosts, which I can then access at http://dockertest.com:8080. I'd like to be able to just go to http://dockertest.com without specifying the port. How can I achieve this? I know port numbers can't be used in the /etc/hosts file, so I'm looking for a way to mimic the effect that would have if it were possible. I need to be able to run multiple Docker apps at the same time because some of the codebases communicate with one another, and each needs its own unique hostname, so I don't think simply setting the ports to 80:80 in the docker-compose file will work, because every app would then be attempting to run on 127.0.0.1:80.
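For reference, the hosts entries on the macOS side look something like this (api.dockertest.com is just a hypothetical second hostname for one of the other apps):

# /etc/hosts -- every app hostname resolves to the loopback address;
# a hosts entry only controls name resolution, never the port
127.0.0.1   dockertest.com
127.0.0.1   api.dockertest.com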
For context, I've followed this tutorial for running Apache, PHP and MySQL on Docker. All of my files are exactly as shown on that site.
Update

I'm getting a 502 Bad Gateway nginx error with the following docker-compose.yml file.
version: "3.3"
services:
  php:
    build: './php/'
    networks:
      - backend
    volumes:
      - ./public_html/:/var/www/html/
  apache:
    build: './apache/'
    depends_on:
      - php
      - mysql
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html/:/var/www/html/
    environment:
      - VIRTUAL_PORT=3000
      - VIRTUAL_HOST=dockertest.com
  mysql:
    image: mysql:5.6.40
    networks:
      - backend
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  frontend:
  backend:
Update 2

I resolved the '502 Bad Gateway' error; here's the updated docker-compose.yml file. I had to add nginx-proxy to one of the networks I referenced. My question isn't completely resolved, but I do have part of it working. For anyone reading this looking for a solution, I created another question here to prevent this one from getting too long.
version: "3.3"
services:
  php:
    build: './php/'
    networks:
      - backend
    volumes:
      - ./public_html/:/var/www/html/
  apache:
    build: './apache/'
    depends_on:
      - php
      - mysql
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html/:/var/www/html/
    environment:
      - VIRTUAL_HOST=dockertest.com
  mysql:
    image: mysql:5.6.40
    networks:
      - backend
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
  nginx-proxy:
    image: jwilder/nginx-proxy
    networks:
      - backend
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  frontend:
  backend:
One possibility is to set up all the applications in their own separate containers and then connect them via a Docker network.
In order to reach all of the containers, I would suggest adding an nginx webserver container to the network as a reverse proxy, which you can then bind to port 80 of your machine.
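A minimal sketch of that layout with the plain Docker CLI could look like this (the network name and app image names are placeholders):

# create a user-defined network and attach the app containers plus nginx to it
docker network create app-net
docker run -d --name dockertest.com --network app-net my-app-image      # hypothetical app image
docker run -d --name api.dockertest.com --network app-net my-api-image  # hypothetical second app
docker run -d --name reverse-proxy --network app-net -p 80:80 nginx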
You can then either define a location for every app separately or define one generic location, for example like the sketch below.
This configuration either has to be placed inline in the original /etc/nginx/nginx.conf or in a separate file that is included inside the http config block. After restarting the nginx service or container (depending on the container setup), you should be able to reach all the services inside the Docker network, and all of the services should be able to communicate with each other without problems.
Of course you still have to keep the entries in the hosts file, so your computer knows that it has to process the request locally.
Edit:
The original config (probably) does not do what it is supposed to do, so I came up with a newer version, which should get the job done.
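Something along these lines (a sketch with one server block per hostname, using the compose service names as upstream hosts; the answer's exact config is not preserved here, and otherapp.com/otherapp are hypothetical):

# placed inside the http block of nginx.conf
server {
    listen 80;
    server_name dockertest.com;

    location / {
        proxy_pass http://apache:80;    # "apache" is the compose service name
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name otherapp.com;           # hypothetical second app

    location / {
        proxy_pass http://otherapp:80;  # hypothetical second service name
        proxy_set_header Host $host;
    }
}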
With this configuration, the nginx server will listen for all incoming requests on port 80 and forward them to the proper container inside the network. You also do not have to configure host resolution yourself, as Docker container names double as the containers' hostnames.
Hopefully this works out for you.
You can use jwilder/nginx-proxy; it's a reverse proxy that is auto-configured from the environment variables of other containers, so you don't need to write nginx proxy configs manually. As requested, it also allows you to specify which port requests are forwarded to.
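A minimal sketch of that pattern, reusing the hostname from the question (the webapp image name is a placeholder):

version: "3.3"
services:
  webapp:
    image: my-webapp-image            # placeholder for your apache/php image
    environment:
      - VIRTUAL_HOST=dockertest.com   # hostname nginx-proxy should route to this container
      - VIRTUAL_PORT=80               # container port nginx-proxy should forward to
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro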
jwilder/nginx-proxy has many more nice features, such as SSL, uWSGI and FastCGI support, and it can also be used in production. There are also "companion" additions, such as Let's Encrypt SSL and a man-in-the-middle proxy.

It looks like your apache server runs on port 80 inside the container. If you want to use dockertest.com from the outside with your /etc/hosts entry, then you have to use port 80 on the outside as well. You need:
- an /etc/hosts entry for the dockertest.com domain
- -p 80:80, or the equivalent ports mapping if you use docker-compose (see the sketch below)
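A sketch of the docker-compose equivalent of -p 80:80, matching the mapping already used in the question's file:

services:
  apache:
    ports:
      - 80:80   # publish container port 80 on host port 80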