I have a webserver that requires a websocket connection in production. I deploy it using docker-compose with nginx as a proxy, so my compose file looks like this:
```yaml
version: '2'
services:
  app:
    restart: always
  nginx:
    restart: always
    ports:
      - "80:80"
```
Now if I scale the "app" service to multiple instances, docker-compose will round-robin each call to the internal DNS name "app".

Is there a way to tell the docker-compose load balancer to apply sticky sessions?

Alternatively, is there a way to solve this with nginx?
A possible solution that I don't like: multiple definitions of the app service.

```yaml
version: '2'
services:
  app1:
    restart: always
  app2:
    restart: always
  nginx:
    restart: always
    ports:
      - "80:80"
```

(And then in the nginx config file I can define sticky sessions between app1 and app2.)
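For reference, a sketch of what that nginx config might look like (the upstream name `app_backend` is my own; the WebSocket upgrade headers are included since the app needs them):

```nginx
upstream app_backend {
    ip_hash;           # sticky: requests from the same client IP go to the same backend
    server app1:80;
    server app2:80;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        # headers required to proxy WebSocket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```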
Best result I got from searching: https://github.com/docker/dockercloud-haproxy
But this requires me to add another service (maybe replacing nginx?), and its docs are pretty sparse about sticky sessions.

I wish docker would just allow configuring this with a simple line in the compose file.
Thanks!
Take a look at jwilder/nginx-proxy. This image provides an nginx reverse proxy that listens for containers that define the `VIRTUAL_HOST` variable and automatically updates its configuration on container creation and removal. tpcwang's fork allows you to use the `IP_HASH` directive at the container level to enable sticky sessions.

Consider the following Compose file:
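(The original file isn't included here; the following is a minimal sketch. The `tpcwang/nginx-proxy` image tag, the hostname `whoami.local`, and the `jwilder/whoami` demo app are assumptions, not taken from the original.)

```yaml
version: '2'
services:
  nginx:
    image: tpcwang/nginx-proxy:latest   # assumed tag for tpcwang's fork of jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # lets the proxy watch the Docker API for container start/stop events
      - /var/run/docker.sock:/tmp/docker.sock:ro
  app:
    image: jwilder/whoami               # assumed demo app that reports its container hostname
    environment:
      - VIRTUAL_HOST=whoami.local       # hostname the proxy routes to this service
      - USE_IP_HASH=1                   # tpcwang's fork: emit the ip_hash directive
```

With this in place, something like `docker-compose up -d` followed by `docker-compose scale app=3` (or `docker-compose up -d --scale app=3` on newer versions) starts the proxy and three app instances.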
Let's get it up and running and then scale `app` to three instances. If you check the nginx configuration file you'll see something like this:
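(The generated configuration isn't shown in the original; this is a sketch of the kind of upstream block jwilder/nginx-proxy generates, with illustrative container IPs and an assumed `whoami.local` virtual host.)

```nginx
upstream whoami.local {
    ip_hash;                 # emitted by tpcwang's fork when USE_IP_HASH is set
    server 172.17.0.3:8080;  # app instance 1 (IPs and ports are illustrative)
    server 172.17.0.4:8080;  # app instance 2
    server 172.17.0.5:8080;  # app instance 3
}
```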
The `nginx` container has automatically detected the three instances and has updated its configuration to route requests to all of them using sticky sessions.

If we try to access the app, it always reports the same hostname on each refresh. If we remove the `USE_IP_HASH` environment variable we'll see that the hostname changes between requests, that is, the nginx proxy falls back to round robin to balance our requests.