I was reading a comment about server architecture.
http://news.ycombinator.com/item?id=520077
In this comment, the person says 3 things:
- The event loop, time and again, has been shown to truly shine for a high number of low activity connections.
- In comparison, a blocking IO model with threads or processes has been shown, time and again, to cut down latency on a per-request basis compared to an event loop.
- On a lightly loaded system the difference is indistinguishable. Under load, most event loops choose to slow down, most blocking models choose to shed load.
Are any of these true?
There is also an article titled "Why Events Are A Bad Idea (for High-concurrency Servers)":
http://www.usenix.org/events/hotos03/tech/vonbehren.html
Typically, if the application is expected to handle millions of connections, you can combine the multi-threaded paradigm with the event-based one.
- First, spawn N threads, where N == the number of cores/processors on your machine. Each thread will have a list of asynchronous sockets that it's supposed to handle.
- Then, for each new connection from the acceptor, "load-balance" the new socket onto the thread with the fewest sockets.
- Within each thread, use an event-based model for all the sockets, so that each thread can actually handle multiple sockets "simultaneously" (see the sketch after this list).
With this approach,
- You never spawn a million threads. You just have as many as your system can handle.
- You utilize the event-based model across multiple cores, as opposed to a single core.
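Here is a minimal sketch of that accept-and-dispatch pattern, in Python with the standard selectors module. The Worker class, the port, the 1-second poll timeout, and the echo handler are all illustrative assumptions, not anything from the linked discussion; real code would also need a proper wakeup mechanism for the per-thread loops and partial-write handling.

```python
import os
import socket
import selectors
import threading

class Worker(threading.Thread):
    """One event loop per thread; each loop multiplexes its own sockets."""
    def __init__(self):
        super().__init__(daemon=True)
        self.selector = selectors.DefaultSelector()
        self.count = 0                 # sockets currently assigned to this thread
        self.lock = threading.Lock()

    def assign(self, conn):
        """Called from the acceptor thread to hand over a new socket."""
        conn.setblocking(False)
        with self.lock:
            self.count += 1
        self.selector.register(conn, selectors.EVENT_READ)

    def run(self):
        while True:
            # Short timeout so sockets registered by the acceptor thread
            # get picked up promptly even on non-epoll selectors.
            for key, _ in self.selector.select(timeout=1):
                conn = key.fileobj
                data = conn.recv(4096)
                if data:
                    conn.sendall(data)           # toy echo handler
                else:                            # client closed the connection
                    self.selector.unregister(conn)
                    conn.close()
                    with self.lock:
                        self.count -= 1

workers = [Worker() for _ in range(os.cpu_count() or 1)]
for w in workers:
    w.start()

acceptor = socket.socket()
acceptor.bind(("0.0.0.0", 8000))
acceptor.listen()
while True:
    conn, _ = acceptor.accept()
    # "Load-balance": hand the socket to the thread with the fewest sockets.
    min(workers, key=lambda w: w.count).assign(conn)
```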
Not sure what you mean by "low activity", but I believe the major factor is how much work you actually need to do to handle each request. With a single-threaded event loop, no other clients get their requests handled while you handle the current one. If handling each request takes significant CPU and/or time, and your machine can actually multitask efficiently (i.e. taking time doesn't just mean waiting on a shared resource, as on a single-CPU machine), you would get better performance by multitasking. That multitasking could be a multithreaded blocking model, but it could also be a single-threaded event loop that collects incoming requests, farms them out to a multithreaded worker pool, and sends each response back as soon as it's ready (sketched below).
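As a rough illustration of that event-loop-plus-worker-pool variant, here is a sketch using Python's asyncio with a ThreadPoolExecutor. cpu_heavy_handler, the pool size, and the port are made-up placeholders for whatever the per-request work actually is:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=8)

def cpu_heavy_handler(request: bytes) -> bytes:
    # Placeholder for work that takes significant CPU and/or time.
    return request.upper()

async def handle(reader, writer):
    request = await reader.readline()      # the event loop stays free here
    loop = asyncio.get_running_loop()
    # Farm the heavy part out to the pool so other clients aren't stalled.
    response = await loop.run_in_executor(pool, cpu_heavy_handler, request)
    writer.write(response)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8001)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

One caveat on this design: in CPython the GIL limits how much truly CPU-bound work threads can run in parallel, so a ProcessPoolExecutor may be the better fit for that case; the overall structure stays the same.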
I don't believe slow client connections matter that much, as I would expect the OS to buffer and handle those efficiently outside of your app (assuming you do not block the event loop for multiple round trips with the client that initiated the request), but I haven't tested this myself.