Should I have separate containers for Flask, uWSGI

Published 2019-03-27 19:03

Question:

I intend to use Kubernetes and Ingress for load balancing. I'm trying to learn how to set up Flask, uWSGI and Nginx. I see this tutorial that has all three installed in the same container, and I'm wondering whether I should use it or not. https://ianlondon.github.io/blog/deploy-flask-docker-nginx/

I'm guessing the benefit of having them as separate containers and separate pods is that they can then all scale individually?

But also, should Flask and uwsgi even be in separate containers? (or Flask and Gunicorn, since uwsgi seems to be very similar to Gunicorn)

Answer 1:

Flask is a web framework; any application written with it needs a WSGI server to host it. Although you could use Flask's built-in development server, you shouldn't, as it isn't suitable for production. You therefore need a WSGI server such as uWSGI, gunicorn or mod_wsgi (mod_wsgi-express). Because the web application is hosted by the WSGI server, the two can only be in the same container, and there is no separate process for Flask: the application runs inside the web server process.
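To make that relationship concrete, here is a minimal sketch of what a WSGI server actually hosts. The application is just a callable; a framework like Flask ultimately exposes one of these through its app object. The stdlib `wsgiref` server is used below purely as a stand-in for uWSGI or gunicorn, for illustration only:

```python
from wsgiref.simple_server import make_server

# A WSGI application is just a callable. The WSGI server (uWSGI,
# gunicorn, or wsgiref here as a stand-in) imports it and invokes it
# inside the server's own worker process -- there is no separate
# "Flask process".
def application(environ, start_response):
    body = b"hello from inside the WSGI server process\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

def serve():
    # For local experimentation only; in production you would run
    # something like `gunicorn module:application` instead.
    with make_server("127.0.0.1", 8000, application) as server:
        server.serve_forever()
```

The point is that there is nothing to split into a second container: the server process and your application code are one and the same process.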

Whether you also need a separate web server such as nginx depends. With mod_wsgi you don't, because it runs on the Apache web server and draws direct benefit from it. mod_wsgi-express additionally comes already set up with an optimal base configuration, in a way that avoids the need for a separate front-facing web server of the kind people often put in front of uWSGI or gunicorn.

For containerised systems where the platform already provides a routing layer for load balancing, as Ingress does in Kubernetes, adding nginx to the mix can introduce complexity you don't need and can reduce performance. That is because you either have to run nginx in the same container, or put it in a separate container in the same pod and use a shared emptyDir volume so the two can still communicate over a UNIX socket. If you instead use an INET socket, or run nginx in a completely different pod, the exercise becomes rather pointless: you are introducing an additional network hop, which is more expensive than binding the two closely over a UNIX socket. uWSGI does not perform as well accepting requests from nginx over an INET socket, and putting nginx in a separate pod, potentially on a different host, can make that worse.
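If you do go the same-pod route, the shared emptyDir arrangement might look roughly like the sketch below. This is an illustrative, untested manifest: the pod name, image names, volume name and socket path are all assumptions, and the nginx container would still need a config that proxies to the socket.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flask-app             # illustrative name
spec:
  volumes:
    - name: socket-dir        # shared by both containers
      emptyDir: {}
  containers:
    - name: nginx
      image: nginx:stable     # needs a config proxying to the UNIX socket
      volumeMounts:
        - name: socket-dir
          mountPath: /run/uwsgi
    - name: app
      image: my-flask-uwsgi   # hypothetical image running uWSGI with
                              # something like --socket /run/uwsgi/app.sock
      volumeMounts:
        - name: socket-dir
          mountPath: /run/uwsgi
```

Both containers mount the same emptyDir, so the socket file uWSGI creates is visible to nginx without any network hop between them.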

Part of the reason for putting nginx in front is that its request buffering can protect you from slow clients, among other potential issues. With Ingress, though, you already have an haproxy or nginx front-end load balancer that can protect you from that to a degree. So whether it is worth introducing an additional nginx proxy really depends on what you are doing; it can be simpler to put gunicorn or uWSGI directly behind the load balancer.

Suggestions are as follows.

  • Also look at mod_wsgi-express. It was developed specifically with containerised systems in mind, to make them easier to work with, and can be a better choice than uWSGI or gunicorn.

  • Test different WSGI servers and configurations against your actual application with real-world traffic profiles, not benchmarks that simply overload it. This matters because the dynamics of a Kubernetes-based system, and the way its routing may be implemented, can make everything behave quite differently from the more traditional systems you may be used to.