
Posted 2019-03-18 18:57

Performance and reliability when using multiple Docker containers VS standard Node cluster

Hi, I have a question regarding the performance, reliability, and growth potential of the setups I've encountered below. I'm far from a Docker or clustering expert, so any advice or tips would be really appreciated.

The app

Typical MEAN stack web application running on Node v6.9.4. Nothing fancy, standard setup.

The problem and possible solutions that I've found

a) Standard Linux server with NGINX (reverse proxy) and Node.js

b) Standard Linux server with NGINX (reverse proxy) and a Node.js cluster, using Node's built-in Cluster module

c) "Dockerized" Node.js app replicated 3 times (3 containers) behind NGINX's load balancer. Credit for the idea goes to Anand Sankar

# Example NGINX load-balancing config (upstream block)
upstream node_app {
    server app1:8000 weight=10 max_fails=3 fail_timeout=30s;
    server app2:8000 weight=10 max_fails=3 fail_timeout=30s;
    server app3:8000 weight=10 max_fails=3 fail_timeout=30s;
}

# Example docker-compose.yml
version: '2'
services:
    nginx:
        build: docker/definitions/nginx
        links:
            - app1:app1
            - app2:app2
            - app3:app3
        ports: 
            - "80:80"
    app1:
        build: app/.
    app2:
        build: app/.
    app3:
        build: app/.

d) All of the above combined. "Dockerized" Node.js app (multiple containers), with the Cluster module configured inside each container, and NGINX's load balancer on top of the 3 containers.

If I understand this correctly, having 3 Node.js containers running the app, where each replica also uses Node's clustering, should deliver excellent performance.

3 containers x 4 workers should mean 12 worker processes to handle all requests/responses. If that's correct, the only drawback would be needing a more powerful machine, in terms of hardware, to support this.
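
Making that arithmetic explicit, with a caveat (the 4-workers-per-container figure is an assumption, i.e. one worker per core on a hypothetical 4-core host):

```javascript
// Back-of-the-envelope capacity for option d.
const containers = 3;          // replicas behind the NGINX load balancer
const workersPerContainer = 4; // assumed: os.cpus().length on a 4-core host
const totalWorkers = containers * workersPerContainer;
console.log(totalWorkers + ' worker processes'); // prints "12 worker processes"

// Caveat: if all 3 containers share one 4-core host, the 12 workers
// contend for the same 4 physical cores, so the extra workers add
// resilience more than raw throughput.
```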

Anyway, my logic may be totally wrong, so I'm looking for any comments or feedback on that!

Goal

My goal is to have production-ready, stable environments that can take some load. We're not talking about thousands of concurrent connections, etc. Keeping the infrastructure scalable and flexible is a big "+".


Hopefully, the question makes sense. Sorry for the long post, but I wanted to keep it clear.

Thank you!

1 Answer

The star
Answered 2019-03-18 19:58

From my experience, options C or D are the most maintainable, and assuming you had the resources available on the server, D would likely be the most performant.

That said, have you looked into Kubernetes at all? There's a slight learning curve, but it's a great tool that allows dynamic scaling and load balancing, and offers much smoother deployment options than Docker Compose. The biggest drawback is that hosting a Kubernetes cluster is more expensive than a single server.
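
For completeness, the same three-replica idea expressed in Kubernetes terms is roughly a Deployment plus a Service. This is a rough sketch with assumed names (`node-app`, image `myapp:latest`, port 8000), not a drop-in config:

```yaml
# Hypothetical Deployment: 3 replicas of the Node app, like app1-app3 above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: myapp:latest   # assumed image name
          ports:
            - containerPort: 8000
---
# The Service load-balances across the 3 pods, replacing the NGINX upstream block
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  selector:
    app: node-app
  ports:
    - port: 80
      targetPort: 8000
```

Scaling then becomes a one-line change (`replicas: 3` to whatever you need) instead of duplicating app1/app2/app3 services in docker-compose.yml.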
