Is non-blocking I/O really faster than multi-threaded blocking I/O?

Posted 2019-01-07 02:32

I searched the web for technical details about blocking I/O and non-blocking I/O, and I found several people stating that non-blocking I/O would be faster than blocking I/O. For example, in this document.

If I use blocking I/O, then of course the thread that is currently blocked can't do anything else, because it's blocked. But as soon as a thread blocks, the OS can switch to another thread and not switch back until there is something for the blocked thread to do. So as long as there is another thread on the system that needs the CPU and is not blocked, there should not be any more CPU idle time than with an event-based non-blocking approach, should there?
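The point about the OS switching away from a blocked thread can be demonstrated directly. Below is a minimal Python sketch (the thread names and the pipe are illustrative choices, not from the question): one thread blocks in a read on a pipe, while the main thread keeps computing; the blocked thread only resumes once data arrives.

```python
import os
import threading

# A pipe gives us a file descriptor whose read blocks until data arrives.
read_fd, write_fd = os.pipe()
results = []

def blocked_reader():
    # This thread blocks inside os.read(); the OS parks it and
    # schedules other runnable threads in the meantime.
    data = os.read(read_fd, 1024)
    results.append(("reader", data))

t = threading.Thread(target=blocked_reader)
t.start()

# While the reader thread is blocked, this thread keeps the CPU busy.
total = sum(range(1_000_000))
results.append(("worker", total))

# Unblock the reader by writing to the pipe.
os.write(write_fd, b"done")
t.join()
os.close(read_fd)
os.close(write_fd)

print(results)  # the worker entry appears before the reader entry
```

The worker's result is always appended first, because the reader cannot return from `os.read()` until the write happens afterwards: the blocked thread cost no CPU time while it waited.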

Besides reducing CPU idle time, I see one more way to increase the number of tasks a computer can perform in a given time frame: reduce the overhead introduced by switching threads. But how can this be done? And is the overhead large enough to show measurable effects? Here is how I picture it working:

  1. To load the contents of a file, an application delegates this task to an event-based I/O framework, passing a callback function along with a filename.
  2. The event framework delegates to the operating system, which programs a DMA controller of the hard disk to write the file directly to memory.
  3. The event framework allows further code to run.
  4. Upon completion of the disk-to-memory copy, the DMA controller causes an interrupt.
  5. The operating system's interrupt handler notifies the event-based I/O framework that the file has been completely loaded into memory. How does it do that? Using a signal?
  6. The code currently running within the event I/O framework finishes.
  7. The event-based I/O framework checks its queue, sees the operating system's message from step 5, and executes the callback it got in step 1.
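The register-then-dispatch cycle in these steps can be sketched with Python's `selectors` module, which wraps the OS readiness APIs (epoll, kqueue, etc.). This is a toy illustration, not the internals of any particular framework, and it uses a pipe rather than a disk file because ordinary file descriptors are always reported as ready; the device side of steps 2-4 is simulated by a plain write.

```python
import os
import selectors

# Miniature event loop: callbacks are registered per file descriptor,
# and the selector tells us which descriptors are ready (steps 5-7).
sel = selectors.DefaultSelector()
read_fd, write_fd = os.pipe()
os.set_blocking(read_fd, False)

received = []

def on_readable(fd):
    # Step 7: the framework invokes the callback passed in step 1.
    received.append(os.read(fd, 1024))
    sel.unregister(fd)

# Step 1: register interest in the descriptor along with a callback.
sel.register(read_fd, selectors.EVENT_READ, on_readable)

# Simulate the device completing its transfer (steps 2-4).
os.write(write_fd, b"file contents")

# Steps 5-7: the loop waits for readiness and dispatches callbacks.
while sel.get_map():
    for key, _ in sel.select(timeout=1):
        key.data(key.fileobj)

sel.close()
os.close(read_fd)
os.close(write_fd)
print(received)  # [b'file contents']
```

Note that the callback runs on the same thread as the loop, so no stack is saved or restored between "tasks"; that is exactly the switching overhead the question asks about.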

Is that how it works? If not, how does it work? Does that mean the event system can work without ever explicitly touching the stack (unlike a real scheduler, which would need to save one thread's stack state and restore another's when switching threads)? How much time does this actually save? Is there more to it?

7 answers
迷人小祖宗
#2 · 2019-01-07 03:04

The main reason to use AIO is scalability. When viewed in the context of a few threads, the benefits are not obvious. But when the system scales to thousands of connections, AIO will offer much better performance. The caveat is that the AIO library must not itself introduce further bottlenecks.
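The scalability claim can be made concrete with a small sketch (mine, not the answerer's): using `asyncio`, a thousand concurrent simulated I/O waits share a single OS thread, so the total time is close to one wait, not a thousand waits, and no thousand stacks are allocated.

```python
import asyncio
import time

# Each task "waits on I/O" for 50 ms; asyncio.sleep stands in for a
# non-blocking read whose completion the event loop waits for.
async def fake_request(i):
    await asyncio.sleep(0.05)
    return i

async def main():
    # 1000 concurrent waits on one thread, one stack, one event loop.
    return await asyncio.gather(*(fake_request(i) for i in range(1000)))

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start

print(len(results), round(elapsed, 2))
# The waits overlap, so elapsed is far closer to 0.05 s than to 50 s.
```

Doing the same with 1000 blocked OS threads works, but each thread costs a stack (often megabytes of address space) plus scheduler bookkeeping, which is where the event-based approach wins at scale.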
