I'm trying to understand everything that happens from the moment a packet reaches the NIC until the packet is received by the target application.
Assumption: buffers are big enough to hold an entire packet. [I know it is not always the case, but I don't want to introduce too many technical details]
One option is:
1. Packet reaches the NIC.
2. An interrupt is raised.
3. The packet is transferred from the NIC buffer to the OS's memory by means of DMA.
4. An interrupt is raised and the OS copies the packet from its buffer to the relevant application.
The problem with the above is that when there is a short burst of data, the kernel can't keep up with the pace. Another problem is that every packet triggers an interrupt, which sounds very inefficient to me.
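To be concrete about what I mean by option one, here is a tiny user-space model of that flow. Everything in it (`fake_nic`, `naive_irq_handler`, the `memcpy` standing in for DMA) is invented for illustration, and the two interrupts in the list are collapsed into one handler for brevity; it is not how a driver is actually written:

```c
#include <stdio.h>
#include <string.h>

#define PKT_MAX 2048

/* Hypothetical model: a single hardware buffer on the NIC. */
struct fake_nic {
    unsigned char hw_buf[PKT_MAX];
    size_t        hw_len;
};

/* Step 3: "DMA" the packet out of the NIC buffer into OS memory. */
static void dma_to_os(struct fake_nic *nic, unsigned char *os_buf, size_t *os_len)
{
    memcpy(os_buf, nic->hw_buf, nic->hw_len);   /* stands in for the DMA engine */
    *os_len = nic->hw_len;
}

/* Step 4: the OS copies from its buffer into the application's buffer. */
static void deliver_to_app(const unsigned char *os_buf, size_t os_len)
{
    (void)os_buf;
    printf("app received %zu bytes\n", os_len);
}

/*
 * Step 2: one interrupt per packet. All the work (DMA, copy to the
 * application) happens per interrupt, which is what makes a burst of
 * small packets so expensive in this model.
 */
static void naive_irq_handler(struct fake_nic *nic)
{
    unsigned char os_buf[PKT_MAX];
    size_t        os_len;

    dma_to_os(nic, os_buf, &os_len);
    deliver_to_app(os_buf, os_len);
}

int main(void)
{
    struct fake_nic nic = { .hw_len = 60 };
    memset(nic.hw_buf, 0xab, nic.hw_len);

    /* Simulate three packets arriving: three interrupts, three full passes. */
    for (int i = 0; i < 3; i++)
        naive_irq_handler(&nic);
    return 0;
}
```

The thing that bothers me is visible in `main()`: three packets mean three interrupts and three full copy paths.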
I know that to solve at least one of the above problems, several buffers are used [a ring buffer]. However, I don't understand the mechanism that makes this work. Suppose that:
1. Packet arrives at the NIC.
2. DMA is triggered and the packet is transferred to one of the buffers [from the ring buffer].
3. Handling of the packet is then scheduled for a later time [bottom half].
Will this work? Is this what happens in a real NIC driver within the Linux kernel? I've sketched the flow I have in mind below.
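This is a toy user-space model of the ring-buffer flow, not real kernel code: `rx_ring`, `irq_handler` and `rx_poll` are made-up names, the `memcpy` stands in for DMA, and the `poll_scheduled` flag stands in for scheduling a bottom half.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define RING_SIZE 8
#define PKT_MAX   2048

/* One receive descriptor: a pre-allocated buffer the "NIC" can DMA into. */
struct rx_desc {
    unsigned char buf[PKT_MAX];
    size_t        len;
    bool          ready;    /* set by the "hardware", cleared by the kernel */
};

static struct rx_desc rx_ring[RING_SIZE];
static unsigned int   hw_next;        /* where the NIC writes next         */
static unsigned int   sw_next;        /* where the bottom half reads next  */
static bool           poll_scheduled;

/* The "NIC" side: DMA a packet into the next free descriptor. */
static void nic_receive(const unsigned char *data, size_t len)
{
    struct rx_desc *d = &rx_ring[hw_next % RING_SIZE];

    memcpy(d->buf, data, len);        /* stands in for the DMA transfer */
    d->len   = len;
    d->ready = true;
    hw_next++;
}

/* Top half: do almost nothing, just mark that deferred work is pending. */
static void irq_handler(void)
{
    poll_scheduled = true;            /* models "schedule the bottom half" */
}

/* Bottom half: drain the ring at a convenient later time. */
static void rx_poll(void)
{
    while (rx_ring[sw_next % RING_SIZE].ready) {
        struct rx_desc *d = &rx_ring[sw_next % RING_SIZE];

        printf("stack got packet of %zu bytes\n", d->len);
        d->ready = false;             /* descriptor can be reused by the NIC */
        sw_next++;
    }
    poll_scheduled = false;
}

int main(void)
{
    unsigned char pkt[60] = { 0 };

    /* A short burst: several packets land in the ring before the kernel runs. */
    for (int i = 0; i < 4; i++) {
        nic_receive(pkt, sizeof(pkt));
        irq_handler();
    }

    if (poll_scheduled)
        rx_poll();                    /* one deferred pass handles the whole burst */
    return 0;
}
```

The part I'm unsure about is whether the real top half really does as little as this (just schedule the deferred work), and whether a single deferred pass that drains the whole ring is how a burst is absorbed in practice.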