First, I must ask: which approach is best in which situations? For example, for a real-time MMORPG server, what if I create a thread per client instead of using non-blocking sockets? Or what if I use one thread that handles all the non-blocking sockets? Can you explain the advantages of each?
I will go on record as saying that for almost anything except toy programs, you should use non-blocking sockets as a matter of course.
Blocking sockets cause a serious problem: if the machine on the other end (or any part of your connection to it) fails during a blocking call, your code will stay blocked until the IP stack's timeout. In a typical case that's around two minutes, which is completely unacceptable for most purposes. The only way¹ to abort that blocking call is to terminate the thread that made it, but terminating a thread is itself almost always unacceptable, since it's essentially impossible to clean up after it and reclaim whatever resources it had allocated. Non-blocking sockets make it trivial to abort a call when/if needed, without doing anything to the thread that made the call.
It is possible to make blocking sockets work reasonably well if you use a multi-process model instead. Here, you simply spawn an entirely new process for each connection. That process uses a blocking socket, and when/if something goes wrong, you just kill the entire process. The OS knows how to clean up the resources of a process, so cleanup isn't a problem. It still has other potential drawbacks, though: 1) you pretty much need a process monitor to kill processes when needed, and 2) spawning a process is usually quite a bit more expensive than just creating a socket. Nonetheless, this can be a viable option in some situations.
¹ Well, technically not the only possible way, but most of the alternatives are relatively ugly. To be more specific: by the time you add code to figure out that there's a problem and then fix it, you've probably done more extra work than if you had just used a non-blocking socket.
Your question deserves a much longer discussion but here's a short stab at an answer:
With non-blocking sockets on Windows you have several options: readiness polling with select(), event- or message-based notification via WSAEventSelect() / WSAAsyncSelect(), and overlapped I/O.
Overlapped I/O will give you the best performance (thousands of sockets per process) at the expense of being the most complicated model to understand and implement correctly.
Basically it comes down to performance vs. programming complexity.
NOTE
Here's a better explanation of why a thread-per-socket model is a bad idea:
On Windows, creating a large number of threads is highly inefficient because the scheduler is unable to properly determine which threads should be receiving processor time and which shouldn't. That, coupled with the memory overhead of each thread, means that you will run out of memory (because of stack space) and processor cycles (because of the overhead of managing threads) at the OS level long before you run out of capacity to handle socket connections.