I have a client and a server both written in C and running on Linux. The client requests data segments and sends similar data segments to the server. Here are the typical interactions between the client and the server.
- The client tells the server to save some data (i.e. a write request). The request is composed of 4KB of data plus a few bytes of metadata (2 x unsigned long + 1 x int; sketched below). The server saves the data and does not respond to write requests.
- The client requests data from the server (i.e. a read request). The request is composed of the same few bytes of metadata (again, 2 x unsigned long + 1 x int). The server responds with a 4KB data segment only.
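For reference, the request metadata looks roughly like this (the field names here are illustrative, not my actual ones):

```c
/* Illustrative request header: 2 x unsigned long + 1 x int.
 * A write request is this header followed by a 4KB payload;
 * a read request is the header alone, answered with 4KB of data. */
struct request_header {
    unsigned long id;      /* e.g. a segment identifier    */
    unsigned long offset;  /* e.g. an offset into the data */
    int           op;      /* e.g. read vs. write          */
};
```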
The trace on the server side shows that it always sends 4KB data segments. However, the trace on the client side tells a different story: packets of different sizes. If at some point the client receives data of a size other than 4KB, then the following packets add up to either 4KB or 8KB.
To illustrate the problem, here are some receive-size sequences I saw in the client trace:
- 4KB, 1200 bytes, 2896 bytes, 4KB.
- 4KB, 1448 bytes, 6744 bytes, 4KB.
I can probably deal with the first scenario (i.e. 1200B + 2896B) at the application level by looping on the read until a complete 4KB segment has arrived (see the sketch below), but I do not know how to deal with the second one, where a single receive (6744 bytes) contains the tail of one segment plus the whole of the next. In any case, I would rather avoid the issue altogether and force the client/server to receive full 4KB data segments.
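For the first scenario, the loop I have in mind looks something like this (a sketch only; `recv_full` is my own helper name):

```c
#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling recv() until exactly len bytes have accumulated,
 * so that a 4KB segment split into e.g. 1200 + 2896 bytes is
 * reassembled before being handed to the application. */
static ssize_t recv_full(int fd, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
        if (n <= 0)      /* error, or the peer closed the connection */
            return n;
        got += (size_t)n;
    }
    return (ssize_t)got;
}
```

Calling `recv_full(sock, buf, 4096)` for each expected segment covers the 1200B + 2896B case.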
I have already tried disabling the Nagle algorithm (TCP_NODELAY) and setting the MTU to 4KB, but neither solved the issue.
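For completeness, this is how I disable Nagle on the connected socket (a sketch; `sock` stands for my connected TCP socket descriptor):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm on an already-connected TCP socket.
 * Returns 0 on success, -1 on failure (errno is set). */
static int disable_nagle(int sock)
{
    int one = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}
```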