Using /etc/hosts with docker

Asked 2019-07-27 09:57

On my Mac I use Vagrant with Ubuntu and Apache running on it, and I have Apache virtual host entries for my various code repositories. On the macOS side of things I create /etc/hosts entries for each of those virtual hosts.

I'm trying to achieve the same effect with Docker, but I'm struggling to figure it out without having to specify the port number when accessing the app, which I don't want to do. For example, I have 127.0.0.1 dockertest.com in my /etc/hosts, which I can then access at http://dockertest.com:8080. I'd like to be able to just go to http://dockertest.com without specifying the port. How can I achieve this? I know port numbers can't be used in the /etc/hosts file, so I'm looking for a way to mimic that effect. I need to be able to run multiple Docker apps at the same time because some of the codebases communicate with one another, and each needs its own unique hostname, so I don't think simply mapping ports 80:80 in the docker-compose file will work, because every app would then be attempting to bind 127.0.0.1:80.

For context I've followed this tutorial for running apache, php and mysql on docker. All of my files are exactly as shown on that site.

Update

I'm getting a 502 Bad Gateway nginx error with the following docker-compose.yml file.

version: "3.3"
services:
  php:
    build: './php/'
    networks:
      - backend
    volumes:
      - ./public_html/:/var/www/html/
  apache:
    build: './apache/'
    depends_on:
      - php
      - mysql
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html/:/var/www/html/
    environment:
      - VIRTUAL_PORT=3000
      - VIRTUAL_HOST=dockertest.com
  mysql:
    image: mysql:5.6.40
    networks:
      - backend
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  frontend:
  backend:

Update 2

I resolved the '502 Bad Gateway' error; here's the updated docker-compose.yml file. I had to add the nginx-proxy service to one of the networks I referenced. My question isn't completely resolved, but I do have part of it working. For anyone reading this looking for a solution, I created another question here to keep this one from getting too long.

version: "3.3"
services:
  php:
    build: './php/'
    networks:
      - backend
    volumes:
      - ./public_html/:/var/www/html/
  apache:
    build: './apache/'
    depends_on:
      - php
      - mysql
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html/:/var/www/html/
    environment:
      - VIRTUAL_HOST=dockertest.com
  mysql:
    image: mysql:5.6.40
    networks:
      - backend
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
  nginx-proxy:
    image: jwilder/nginx-proxy
    networks:
      - backend
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  frontend:
  backend:

3 Answers
倾城 Initia · 2019-07-27 10:22

One possibility is to set up each application in its own container and then connect them via a Docker network.

In order to reach all of the containers, I would suggest adding an nginx webserver container to the network as a reverse proxy and binding it to port 80 of your machine.

You can then either define a location for every app separately (a sketch of that variant follows the generic config below) or define one generic location like this:

# sample.conf
server {
  listen 80 default_server;
  # capture the requested hostname so it can be reused as the upstream container name
  server_name ~^(?<docker_host_name>.+)$;
  # Docker's embedded DNS, needed so nginx can resolve container names at request time
  # (proxy_pass below uses a variable, which is resolved at runtime)
  resolver 127.0.0.11;
  # capture the request path as $1 for the proxy_pass below
  location ~ ^(/.*)$ {
    # for actual request forwarding
    proxy_pass                         http://$docker_host_name$1$is_args$args;
    # some stuff I figured out I have to use in order for the service to work properly
    proxy_set_header                   Upgrade $http_upgrade;
    proxy_set_header                   Connection 'upgrade';
    proxy_http_version                 1.1;
    proxy_cache_bypass                 $http_upgrade;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
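
For the per-app variant mentioned above, a minimal sketch could look like the following. The server names (app1.dockertest.com, app2.dockertest.com) and the upstream container names (app1, app2) are placeholders for illustration; they should match your actual /etc/hosts entries and container names.

# per-app.conf (hypothetical example)
server {
  listen 80;
  server_name app1.dockertest.com;
  location / {
    # forward to the container named "app1" on the shared docker network
    proxy_pass http://app1:80;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
  }
}

server {
  listen 80;
  server_name app2.dockertest.com;
  location / {
    # forward to the container named "app2"
    proxy_pass http://app2:80;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
  }
}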

Whichever variant you choose, the configuration has to be placed either inline in the original /etc/nginx/nginx.conf or in a separate file that is included inside the http config block.
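
For the include variant, a minimal sketch of the relevant part of nginx.conf could look like this (the /etc/nginx/conf.d/ path is the one the stock nginx image already includes; adjust it if your image is laid out differently):

# excerpt from /etc/nginx/nginx.conf
http {
    # ... existing http settings ...

    # pull sample.conf (and any other vhost files) into the http block
    include /etc/nginx/conf.d/*.conf;
}

Since the official nginx image ships with that include line by default, mounting the file into the container, for example with a volume entry like ./sample.conf:/etc/nginx/conf.d/sample.conf:ro, is usually all that is needed.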

After restarting the nginx service or container (depending on your setup), you should be able to reach all the services inside the Docker network, and the services should be able to communicate with each other without problems.

Of course you still have to keep the entries in the hosts file, so your computer knows that it has to process the request locally.
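
For example, with two apps the hosts file on the machine running Docker might look like this (the hostnames are placeholders and should match whatever your nginx config and containers expect):

# /etc/hosts on the host machine
127.0.0.1 app1.dockertest.com
127.0.0.1 app2.dockertest.com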

Edit:

The original config (probably) does not do what it is supposed to do. So, I came up with a newer version, which should get the job done:

# sample.conf
server {
  listen 80 default_server;
  # Docker's embedded DNS, needed so nginx can resolve container names at request time
  # (proxy_pass below uses a variable, which is resolved at runtime)
  resolver 127.0.0.11;
  # capture the request path as $1 for the proxy_pass below
  location ~ ^(/.*)$ {
    # for actual request forwarding
    proxy_pass                         http://$host$1$is_args$args;
    # some stuff I figured out I have to use in order for the service to work properly
    proxy_set_header                   Upgrade $http_upgrade;
    proxy_set_header                   Connection 'upgrade';
    proxy_http_version                 1.1;
    proxy_cache_bypass                 $http_upgrade;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

With this configuration, the nginx server will listen for all incoming requests on port 80 and forward them to the proper container inside the network. You also do not have to configure host resolution yourself, since a Docker container's name also acts as its hostname on the network.
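
A minimal docker-compose sketch of that idea might look like the following. The image names are placeholders, and the network alias is just one way to make a container reachable under the exact hostname the browser requests (dockertest.com); the proxy mounts the sample.conf shown above.

# docker-compose.yml (hypothetical sketch)
version: "3.3"
services:
  apache:
    image: my_apache_image          # placeholder for your apache/php image
    networks:
      frontend:
        aliases:
          - dockertest.com          # resolvable as "dockertest.com" inside the network
  proxy:
    image: nginx:alpine
    ports:
      - 80:80
    volumes:
      - ./sample.conf:/etc/nginx/conf.d/sample.conf:ro
    networks:
      - frontend
networks:
  frontend: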

Hopefully this works out for you.

叛逆 · 2019-07-27 10:41

You can use jwilder/nginx-proxy, a reverse proxy that configures itself automatically from the environment variables of the other containers, so you don't need to write nginx proxy configs by hand. As requested, it also lets you specify which container port requests should be forwarded to.

# docker-compose.yml

version: '3.3'

services:

  lamp:
    environment:
      VIRTUAL_HOST: some_domain.dev
      VIRTUAL_PORT: 9999
    image: my_lamp_image

  app:
    environment:
      VIRTUAL_HOST: another_domain.dev
      VIRTUAL_PORT: 3000
    image: my_app_image

  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
# /etc/hosts

127.0.0.1 some_domain.dev
127.0.0.1 another_domain.dev
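
With those two pieces in place (and assuming the compose file above is saved as docker-compose.yml in the current directory), a quick check could look like this:

# start the stack and check that the proxy answers for one of the virtual hosts
docker-compose up -d
curl -I http://some_domain.dev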

jwilder/nginx-proxy has many more nice features, such as SSL, uWSGI and FastCGI support, and it can also be used in production. There are also "companion" additions, such as a Let's Encrypt SSL companion and a man-in-the-middle proxy.

男人必须洒脱 · 2019-07-27 10:46

It looks like your Apache server runs on port 80 inside the container. If you want to use dockertest.com from outside with your /etc/hosts entry, then you have to publish port 80 on the host as well.

  1. Make your /etc/hosts entry for the dockertest.com domain.
  2. If you use docker run, start the container with -p 80:80, or if you use docker-compose (see the sketch after this list):
ports:
  - "80:80"
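
Put together, a minimal compose sketch for this answer might look like the following. The build path and service name are taken from the question's setup; note that only one service can publish host port 80 at a time, which is why running several apps this way eventually needs a reverse proxy in front.

# docker-compose.yml (minimal sketch for this answer)
version: "3.3"
services:
  apache:
    build: './apache/'
    ports:
      - "80:80"    # publish the container's port 80 directly on host port 80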