How does NetworkStream work in two directions?

Posted 2019-07-24 19:53

Question:

I've read an example of a TCP echo server, and some things are unclear to me.

TcpClient client = null;
NetworkStream netStream = null;
byte[] rcvBuffer = new byte[8192];   // receive buffer (size is arbitrary)
int bytesRcvd;                       // bytes returned by the last Read

try {
  client = listener.AcceptTcpClient();
  netStream = client.GetStream();

  int totalBytesEchoed = 0;
  // Read returns 0 once the client has closed its side of the connection.
  while ((bytesRcvd = netStream.Read(rcvBuffer, 0, rcvBuffer.Length)) > 0) {
    netStream.Write(rcvBuffer, 0, bytesRcvd);
    totalBytesEchoed += bytesRcvd;
  }

  netStream.Close();
  client.Close();
} catch {
  if (netStream != null) netStream.Close();
  if (client != null) client.Close();
}

When the server receives data (inside the while loop), it reads it into rcvBuffer and writes it back to the stream.

What confuses me is the chronological order of messages in the communication. Is the data written with netStream.Write() sent to the client immediately (even while the client may still be sending), or only after the data the client has already written to the stream has been processed?

The following question may clarify the previous one: if a client sends some data by writing to the stream, is that data moved to a queue on the server side, waiting to be read, so that the stream itself is effectively "empty"? That would explain why the server can immediately write to the stream - because the data coming from the client is actually buffered elsewhere...?

Answer 1:

Hint: The method call NetworkStream.Read is blocking in that example.

The book is absolutely correct -- raw access to TCP streams does not imply any sort of extra "chunking" and, in this example for instance, a single byte could easily be processed at a time. However, performing the reading and writing in batches (normally with exposed buffers) can allow for more efficient processing (often as a result of fewer system calls). The network layer and network hardware also employ their own forms of buffers.
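
For example, here is a rough sketch of the client side of such an echo exchange (the host name, port, and message are my own assumptions, not part of the original example): even though the client sends its text with a single Write(), nothing guarantees the echo comes back in one piece, so the receiving loop has to keep reading until all the bytes it sent have returned.

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class EchoClientSketch {
  static void Main() {
    // "localhost" and port 7777 are assumptions; use your echo server's endpoint.
    using (TcpClient client = new TcpClient("localhost", 7777))
    using (NetworkStream netStream = client.GetStream()) {
      byte[] sendBytes = Encoding.ASCII.GetBytes("Hello, echo server!");
      netStream.Write(sendBytes, 0, sendBytes.Length);   // one logical "message"

      byte[] rcvBuffer = new byte[sendBytes.Length];
      int totalBytesRcvd = 0;
      // Read may return anywhere from 1 byte up to the requested count,
      // so keep calling it until the whole echo has arrived.
      while (totalBytesRcvd < sendBytes.Length) {
        int bytesRcvd = netStream.Read(rcvBuffer, totalBytesRcvd, rcvBuffer.Length - totalBytesRcvd);
        if (bytesRcvd == 0)
          throw new IOException("Connection closed before the full echo arrived");
        totalBytesRcvd += bytesRcvd;
      }
      Console.WriteLine("Echoed back: " + Encoding.ASCII.GetString(rcvBuffer, 0, totalBytesRcvd));
    }
  }
}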

There is actually no guarantee that data written with Write() will be delivered before further Read() calls complete: even if the data is flushed in one layer, that does not imply it is flushed in another, and there is absolutely no guarantee that the data has made its way back over to the client. This is where higher-level protocols come into play.

With this echo example the data is simply shoved through as fast as it can be. Both the Write and the Read will block based upon the underlying network stack (the send and receive buffers in particular), each with their own series of buffers.

[This simplifies things a bit of course -- one could always look at the TCP protocol itself, which does impose transmission characteristics on the actual packet flow.]
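
One way to make that buffering visible is a small, self-contained sketch along the following lines (the loopback port, buffer sizes, and chunk size are arbitrary assumptions). The "client" deliberately never reads, yet the server-side Write() keeps returning until the OS send and receive buffers fill up, at which point it blocks; a Write() that returns only means the data reached the local network stack, not that the peer has consumed it. The program is expected to hang at that point - the hang is the demonstration.

using System;
using System.Net;
using System.Net.Sockets;

class WriteBuffersSketch {
  static void Main() {
    // Port 0 lets the OS pick a free loopback port.
    TcpListener listener = new TcpListener(IPAddress.Loopback, 0);
    listener.Start();
    int port = ((IPEndPoint)listener.LocalEndpoint).Port;

    // A "client" that connects but deliberately never reads.
    TcpClient lazyClient = new TcpClient();
    lazyClient.ReceiveBufferSize = 8 * 1024;   // keep the OS buffers small (assumption)
    lazyClient.Connect(IPAddress.Loopback, port);

    TcpClient serverSide = listener.AcceptTcpClient();
    serverSide.SendBufferSize = 8 * 1024;
    NetworkStream netStream = serverSide.GetStream();

    byte[] chunk = new byte[4 * 1024];
    long totalWritten = 0;
    // Each Write returns as soon as the chunk is copied into the send buffer.
    // Once the send buffer and the peer's receive buffer are full, Write blocks
    // even though the client has not read a single byte.
    while (true) {
      netStream.Write(chunk, 0, chunk.Length);
      totalWritten += chunk.Length;
      Console.WriteLine("Write returned; " + totalWritten + " bytes buffered, none read by the client");
    }
  }
}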



Answer 2:

A TCP connection is, in principle, full duplex. So you are dealing with 2 separate channels, and yes, both sides could be writing at the same time.
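
To illustrate (a hedged sketch; the endpoint and payload sizes are assumptions, and it expects an echo server on the other end): one task writes to the NetworkStream while another task reads from it concurrently, which is fine on a single TCP connection as long as there is one dedicated writing thread and one dedicated reading thread.

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class FullDuplexSketch {
  static void Main() {
    // "localhost" and port 7777 (a running echo server) are assumptions.
    using (TcpClient client = new TcpClient("localhost", 7777))
    using (NetworkStream netStream = client.GetStream()) {
      byte[] outgoing = new byte[1024];
      byte[] incoming = new byte[1024];

      // One task keeps writing to the connection...
      Task writer = Task.Run(() => {
        for (int i = 0; i < 100; i++)
          netStream.Write(outgoing, 0, outgoing.Length);
      });

      // ...while another task reads the echoed data from the same connection
      // at the same time; the two directions do not wait for each other.
      Task reader = Task.Run(() => {
        int total = 0;
        while (total < 100 * outgoing.Length) {
          int n = netStream.Read(incoming, 0, incoming.Length);
          if (n == 0) break;   // server closed the connection
          total += n;
        }
        Console.WriteLine("Read back " + total + " bytes while the writer was still sending");
      });

      Task.WaitAll(writer, reader);
    }
  }
}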



Answer 3:

You are right that, technically, when performing a Read() operation you are not reading bits off the wire. You are basically reading buffered data (chunks received by TCP and arranged in the correct order). When sending, you can call Flush(), which in theory should send the data immediately, but modern TCP stacks have a bit of logic for gathering data into appropriately sized packets and bursting them onto the wire.

As Henk Holterman explained, TCP is a full-duplex protocol (if supported by all the underlying infrastructure), so sending and receiving data is more a matter of when your server/client reads and writes data. It's not the case that when your server sends data, the client will read it immediately. The client can be sending its own data and only then perform a Read(); in that case the data will stay in the network buffer longer, and it can be discarded after some time if no one wants to read it. At least that's what I've experienced when dealing with my supa dupa server/client library (-.
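
As a side note on the "gather data into appropriately sized packets" behaviour: that batching is largely Nagle's algorithm, and if latency matters more than throughput it can be switched off with the NoDelay option on the TcpClient (or the underlying Socket). A minimal sketch, with the endpoint as an assumption:

using System;
using System.Net.Sockets;
using System.Text;

class NoDelaySketch {
  static void Main() {
    // "localhost" and port 7777 are assumptions; point this at your own server.
    using (TcpClient client = new TcpClient("localhost", 7777)) {
      // Disable Nagle's algorithm so small writes are handed to the wire
      // right away instead of being coalesced into larger segments.
      client.NoDelay = true;

      NetworkStream netStream = client.GetStream();
      byte[] ping = Encoding.ASCII.GetBytes("ping");
      netStream.Write(ping, 0, ping.Length);   // small payload, sent without the usual batching delay
    }
  }
}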