How to make an nginx reverse proxy load-balance between two containers

Published 2019-06-25 04:00

I am trying to make an nginx reverse proxy load-balance between two containers running the same Node.js app.

The directory structure:

.
+-- docker-compose.yml
+-- nginx
+-- nodejs
|   +-- index.js
|   +-- …
+-- php

docker-compose.yml:

version: "3.1"

services:

  nginx-proxy:
    image: nginx:alpine
    ports:
      - "8000:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    links:
      - php:php-app
      - nodejs:nodejs-app

  nodejs:
    image: node:alpine
    environment: 
      NODE_ENV: production
    working_dir: /home/app
    restart: always
    volumes:
      - ./nodejs:/home/app
    command: ["node", "index.js"]

  php:
    image: php:apache
    volumes:
      - ./php:/var/www/html

index.js listens on port 8080.

nginx conf default.conf:

upstream nodejs-upstream {
  server nodejs-app:8080;
}

server {
  listen 80;
  root /srv/www;

  location / {
    try_files $uri @nodejs;  
  }

  location @nodejs {
    proxy_pass http://nodejs-upstream;  # port is already set in the upstream block
    proxy_set_header Host $host;
  }

  location /api {
    proxy_pass http://php-app:80/api;
    proxy_set_header Host $host;
  }
}

Now I start the app with

docker-compose up --scale nodejs=2

Does it load-balance?

  • I don't think so because the two instances of the nodejs app listen on the same port 8080.

How is it possible to make the nginx server load-balance between the two instances of the nodejs app?

Is there a better way to do this?
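One way to do this without any external tool is to lean on Docker's embedded DNS server, which is reachable at 127.0.0.11 inside containers on a compose network and resolves a scaled service name to the IPs of all its replicas. A sketch of default.conf under that assumption (service named `nodejs`, as in the compose file above):

```nginx
server {
  listen 80;

  # Docker's embedded DNS; valid=10s limits how long nginx caches answers,
  # so newly scaled replicas are picked up within seconds.
  resolver 127.0.0.11 valid=10s;

  location / {
    # Using a variable forces nginx to re-resolve "nodejs" at request time
    # instead of only once at startup; nginx round-robins across all
    # A records the resolver returns.
    set $upstream http://nodejs:8080;
    proxy_pass $upstream;
    proxy_set_header Host $host;
  }
}
```

The trade-off versus a static `upstream` block is that you lose per-server options such as `max_fails` and weights, but you gain automatic discovery of replicas started with `--scale`.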


EDIT 1

I am still curious to know how to do that without jwilder/nginx-proxy. Thanks


EDIT 2

I have something which kind of works with:

default.conf:

upstream nodejs-upstream {
  server nodejs_1:8080;
  server nodejs_2:8080;
}

This works while both nodejs containers are up. When I docker stop nodejs_2, the app stays available (load balancing seems to work), but requests can take very long to complete (up to 1 min on localhost). If I restart the container, it works fine again…
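The long delay after stopping a replica is typically nginx waiting out its default connect timeout before trying the next server. Tightening the failure-detection directives shortens this considerably; a sketch (the `@nodejs` location mirrors the config above):

```nginx
upstream nodejs-upstream {
  # After 1 failed attempt, skip this server for 10s before retrying it.
  server nodejs_1:8080 max_fails=1 fail_timeout=10s;
  server nodejs_2:8080 max_fails=1 fail_timeout=10s;
}

server {
  listen 80;

  location @nodejs {
    proxy_pass http://nodejs-upstream;
    # Give up on an unreachable backend quickly instead of waiting out
    # the default (60s) connect timeout.
    proxy_connect_timeout 2s;
    # On error or timeout, transparently retry the request on the next
    # server in the upstream group.
    proxy_next_upstream error timeout;
    proxy_set_header Host $host;
  }
}
```

With these settings the first request after a container stops still pays the short `proxy_connect_timeout`, but subsequent requests go straight to the healthy replica for the `fail_timeout` window.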

1 Answer

神经病院院长 · 2019-06-25 04:23

Q. Is there a better way to do this?

Yes, IMO: use the jwilder/nginx-proxy approach. It updates itself automatically and discovers any new container, adding it to its balancing pool.

https://github.com/jwilder/nginx-proxy

version: "3.1"

services:

  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "8000:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

  nodejs:
    image: node:alpine
    environment: 
      NODE_ENV: production
      VIRTUAL_HOST: localhost
      VIRTUAL_PORT: 8080
    working_dir: /home/app
    restart: always
    volumes:
      - ./nodejs:/home/app
    command: ["node", "index.js"]

Nginx will be updated automatically when you scale. Note the VIRTUAL_HOST env var in the app service: nginx-proxy reads it to know which requests to route to that container, and you don't need any further configuration (no .conf file).
