Question:
First of all, I have to ask which approach is best in which situations. Take a real-time MMORPG server, for example. What if I create a thread per client instead of using non-blocking sockets? Or what if I use a single thread that handles all the non-blocking sockets? Can you explain the advantages of each?
Answer 1:
Your question deserves a much longer discussion but here's a short stab at an answer:
- using blocking sockets means that only one socket may be active at any time in any one thread (because it blocks while waiting for activity)
- using blocking sockets is generally easier than non-blocking sockets (asynchronous programming tends to be more complicated)
- you could create one thread per socket, as you suggested, but threads have per-thread overhead and scale poorly compared to the non-blocking solutions
- with non-blocking sockets you could handle a much larger volume of clients: it could scale to hundreds of thousands in a single process, but the code becomes a little more complicated (a minimal sketch follows this list)
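To make the single-threaded option concrete, here is a minimal sketch of one thread serving many clients with non-blocking sockets and select(). It is written against BSD/POSIX sockets for brevity (a Winsock version differs mainly in needing WSAStartup(), ioctlsocket(FIONBIO) and closesocket()); the port number, buffer size and echo behaviour are placeholders, and error handling is stripped down:

```c
/* One thread, many clients: non-blocking sockets polled with select(). */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <netinet/in.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);               /* placeholder port */
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 16);
    fcntl(listener, F_SETFL, fcntl(listener, F_GETFL, 0) | O_NONBLOCK);

    int clients[FD_SETSIZE];
    int nclients = 0;

    for (;;) {
        /* Rebuild the set of sockets we care about on every pass. */
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(listener, &readable);
        int maxfd = listener;
        for (int i = 0; i < nclients; i++) {
            FD_SET(clients[i], &readable);
            if (clients[i] > maxfd) maxfd = clients[i];
        }

        /* The only place this thread ever waits: until *some* socket is ready. */
        if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
            break;

        /* New connection? */
        if (FD_ISSET(listener, &readable) && nclients < FD_SETSIZE - 1) {
            int c = accept(listener, NULL, NULL);
            if (c >= 0) {
                fcntl(c, F_SETFL, fcntl(c, F_GETFL, 0) | O_NONBLOCK);
                clients[nclients++] = c;
            }
        }

        /* Service every client that has data, without ever blocking on one. */
        for (int i = 0; i < nclients; i++) {
            if (!FD_ISSET(clients[i], &readable)) continue;
            char buf[512];
            ssize_t n = recv(clients[i], buf, sizeof buf, 0);
            if (n <= 0) {                              /* disconnect or error */
                close(clients[i]);
                clients[i--] = clients[--nclients];    /* swap-remove, recheck moved slot */
            } else {
                send(clients[i], buf, (size_t)n, 0);   /* echo it back */
            }
        }
    }
    return 0;
}
```

The key property is that the only place the thread ever waits is the select() call, so one slow or silent client cannot stall the others. Note that select() itself is capped at FD_SETSIZE sockets; the larger volumes mentioned above need the event-based or overlapped mechanisms described below.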
With non-blocking sockets (on Windows) you have a few options:
- polling
- event-based notification
- overlapped I/O
Overlapped I/O will give you the best performance (thousands of sockets per process) at the expense of being the most complicated model to understand and implement correctly; a rough skeleton of it follows.
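As a rough illustration of the overlapped model (a sketch, not a production design), the skeleton below ties each accepted socket to an I/O completion port, posts reads with WSARecv() plus an OVERLAPPED structure, and drains the results with GetQueuedCompletionStatus() on a worker thread. The port number, buffer size, echo behaviour, the single worker thread, and the plain blocking accept() instead of AcceptEx() are all simplifications:

```c
/* Overlapped I/O (IOCP) skeleton. Compile and link against ws2_32.lib. */
#include <winsock2.h>
#include <windows.h>
#include <stdlib.h>
#pragma comment(lib, "ws2_32.lib")

typedef struct {
    OVERLAPPED ov;              /* must be first: completions hand this pointer back */
    WSABUF     wsabuf;
    char       buf[512];
    SOCKET     sock;
} PER_IO;

/* Post (re-arm) an overlapped read; it returns immediately and the result
   arrives later on the completion port. Error handling omitted. */
static void post_recv(PER_IO *io)
{
    DWORD flags = 0;
    ZeroMemory(&io->ov, sizeof io->ov);
    io->wsabuf.buf = io->buf;
    io->wsabuf.len = sizeof io->buf;
    WSARecv(io->sock, &io->wsabuf, 1, NULL, &flags, &io->ov, NULL);
}

/* Worker thread: drain completed I/O from the port and react to it. */
static DWORD WINAPI completion_loop(LPVOID arg)
{
    HANDLE iocp = (HANDLE)arg;
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED *ov = NULL;
        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE) && ov == NULL)
            break;                              /* the port itself failed or was closed */
        PER_IO *io = (PER_IO *)ov;              /* ov is the first field of PER_IO */
        if (bytes == 0) {                       /* peer closed, or the read failed */
            closesocket(io->sock);
            free(io);
            continue;
        }
        send(io->sock, io->buf, (int)bytes, 0); /* echo back (blocking send, for brevity) */
        post_recv(io);                          /* re-arm the next read */
    }
    return 0;
}

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);                /* placeholder port */
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, SOMAXCONN);

    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
    CreateThread(NULL, 0, completion_loop, iocp, 0, NULL);  /* roughly one per core in practice */

    for (;;) {                                  /* plain blocking accept, for brevity */
        SOCKET c = accept(listener, NULL, NULL);
        if (c == INVALID_SOCKET) break;
        PER_IO *io = (PER_IO *)calloc(1, sizeof *io);
        io->sock = c;
        CreateIoCompletionPort((HANDLE)c, iocp, 0, 0);      /* tie the socket to the port */
        post_recv(io);                          /* post the first overlapped read */
    }
    return 0;
}
```

A real server would run roughly one completion thread per core, keep richer per-connection state alongside the OVERLAPPED, and handle errors properly, but the calls shown are the heart of the model.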
Basically it comes down to performance vs. programming complexity.
NOTE
Here's a better explanation of why using a thread-per-socket model is a bad idea:
In Windows, creating a large number of threads is highly inefficient because the scheduler is unable to properly determine which threads should be receiving processor time and which shouldn't. That, coupled with the memory overhead of each thread (by default, each thread reserves on the order of a megabyte of stack), means that you will run out of memory (because of stack space) and processor cycles (because of the overhead of managing the threads) at the OS level long before you run out of capacity to handle socket connections.
Answer 2:
I will go on record as saying that for almost anything except toy programs, you should use non-blocking sockets as a matter of course.
Blocking sockets cause a serious problem: if the machine on the other end (or any part of your connection to it) fails during a blocking call, your code will end up blocked until the IP stack's timeout. In a typical case, that's around 2 minutes, which is completely unacceptable for most purposes. The only way[1] to abort that blocking call is to terminate the thread that made it, but terminating a thread is itself almost always unacceptable, as it's essentially impossible to clean up after it and reclaim whatever resources it had allocated. Non-blocking sockets make it trivial to abort a call when/if needed, without doing anything to the thread that made the call.
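As a small illustration of that "abort when you choose" property, here is a sketch of a read with a caller-chosen deadline: the socket is made non-blocking and select() is given a timeout, so a dead peer can only hold you up for as long as you allow. It is POSIX-style for brevity (Winsock would use ioctlsocket(FIONBIO) with the same select() call), and the function name and parameters are hypothetical:

```c
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <fcntl.h>

/* Returns bytes read, 0 on orderly close, -1 on error or on *our* timeout. */
ssize_t recv_with_deadline(int sock, void *buf, size_t len, int seconds)
{
    /* Make sure recv() itself can never block. */
    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

    fd_set readable;
    FD_ZERO(&readable);
    FD_SET(sock, &readable);

    struct timeval deadline = { .tv_sec = seconds, .tv_usec = 0 };
    int ready = select(sock + 1, &readable, NULL, NULL, &deadline);
    if (ready <= 0)              /* timed out (0) or select failed (-1) */
        return -1;               /* the caller gives up; no thread has to be killed */

    return recv(sock, buf, len, 0);
}
```

Because nothing blocks inside the socket call itself, the decision of how long to wait, and what to do on timeout, stays entirely in your own code.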
It is possible to make blocking sockets work reasonably well if you use a multi-process model instead. Here, you simply spawn an entirely new process for each connection (a sketch of this pattern follows the list below). That process uses a blocking socket, and when/if something goes wrong, you just kill the entire process. The OS knows how to clean up the resources from a process, so cleanup isn't a problem. It still has other potential problems though: 1) you pretty much need a process monitor to kill processes when needed, and 2) spawning a process is usually quite a bit more expensive than just creating a socket. Nonetheless, this can be a viable option, especially if:
- You're dealing with a small number of connections at a time
- You generally do extensive processing for each connection
- You're dealing only with local hosts so your connection to them is fast and dependable
- You're more concerned with optimizing development than execution
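For completeness, here is what the process-per-connection variant can look like on a POSIX system, where fork() makes it cheap to express; on Windows you would need CreateProcess() or a pre-spawned pool of workers instead, since there is no fork(). The port number and the echo loop are placeholders:

```c
/* Process-per-connection: the parent only accepts, each child owns one
   blocking socket, and killing a child lets the OS reclaim everything. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <signal.h>
#include <unistd.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);                  /* let the kernel reap exited children */

    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);               /* placeholder port */
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 16);

    for (;;) {
        int client = accept(listener, NULL, NULL);
        if (client < 0) continue;

        if (fork() == 0) {                     /* child: one client, blocking calls are fine */
            close(listener);
            char buf[512];
            ssize_t n;
            while ((n = recv(client, buf, sizeof buf, 0)) > 0)
                send(client, buf, (size_t)n, 0);   /* echo it back */
            close(client);
            _exit(0);
        }
        close(client);                         /* parent: drop its copy, keep accepting */
    }
}
```

Because each connection lives in its own process, "cleanup" after a wedged connection is simply killing that process; the OS reclaims the socket, memory and anything else the child held.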
[1] Well, technically not the only possible way, but most of the alternatives are relatively ugly. To be more specific, I think that by the time you've added code to figure out that there's a problem and then to recover from it, you've probably done more extra work than if you had just used a non-blocking socket in the first place.