Multiple server processes using nginx and uWSGI

Posted 2020-05-27 04:27

Question:

I've noticed that you can start multiple processes within one uWSGI instance behind nginx:

uwsgi --processes 4 --socket /tmp/uwsgi.sock
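
For reference, the same process model can be written as an ini file instead of command-line flags. This is a minimal sketch; the chdir path and module name are assumptions, not part of the question:

[uwsgi]
socket = /tmp/uwsgi.sock
processes = 4
master = true
; the application location below is hypothetical
chdir = /srv/myapp
module = myapp:app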

Or you can start multiple uWSGI instances on different sockets and load balance between them using nginx:

upstream my_servers {
    server unix:///tmp/uwsgi1.sock;
    server unix:///tmp/uwsgi2.sock;
    #...
}
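
For context, the upstream is then referenced from a server block via uwsgi_pass. A sketch; the listen port and location are assumptions:

server {
    listen 80;

    location / {
        include uwsgi_params;    # standard uWSGI parameter set shipped with nginx
        uwsgi_pass my_servers;   # nginx round-robins across the upstream sockets above
    }
}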

What is the difference between these 2 strategies and is one preferred over the other?

How does the load balancing done by uWSGI (in the first case) differ from the load balancing done by nginx (in the second case)?

nginx can front servers on multiple hosts. Can uWSGI do this within a single instance? Do certain uWSGI features only work within a single uWSGI process (i.e. shared memory/cache)? If so, it might be difficult to scale from the first approach to the second one.

Answer 1:

The difference is that in the uWSGI case there is no "real" load balancing: the first free process always responds. This approach is better than having nginx load balance between multiple local instances (and this holds only for local instances). What you need to take into account is the "thundering herd problem"; its implications are explained here: http://uwsgi-docs.readthedocs.org/en/latest/articles/SerializingAccept.html.
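If that article applies to your setup, uWSGI exposes a --thunder-lock option to serialize accept() across workers. A sketch reusing the command from the question; whether you need it depends on your uWSGI version and OS:

# serialize accept() between workers, as discussed in the linked article;
# --master is assumed here because thunder-lock relies on the master process
uwsgi --master --processes 4 --socket /tmp/uwsgi.sock --thunder-lock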

Finally, all of the uWSGI features are multithread/multiprocess (and green-thread) aware, so the cache (for example) is shared by all processes.
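
As an illustration of that shared cache, here is a sketch using uWSGI's built-in caching framework; the cache name, size, and key are hypothetical:

# create a named cache shared by all 4 workers (name and size are made up)
uwsgi --master --processes 4 --socket /tmp/uwsgi.sock --cache2 name=mycache,items=100

# inside the application, a value written by one worker is visible to the others
import uwsgi

uwsgi.cache_update("hits", b"1", 0, "mycache")  # key, value, expires (0 = never), cache name
value = uwsgi.cache_get("hits", "mycache")      # returns b"1" regardless of which worker runs this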



Tags: nginx uwsgi