Question:
Is there a standard call for flushing the transmit side of a POSIX socket all the way through to the remote end, or does this need to be implemented as part of the user-level protocol? I looked around the usual headers but couldn't find anything.
Answer 1:
For Unix-domain sockets, you can use fflush(), but I'm thinking you probably mean network sockets. There isn't really a concept of flushing those. The closest things are:

- At the end of your session, calling shutdown(sock, SHUT_WR) to close out writes on the socket (see the sketch at the end of this answer).
- On TCP sockets, disabling the Nagle algorithm with the sockopt TCP_NODELAY, which is generally a terrible idea that will not reliably do what you want, even if it seems to take care of it on initial investigation.
It's very likely that handling whatever issue is calling for a 'flush' at the user protocol level is going to be the right thing.
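For illustration, a minimal sketch of the shutdown() route, assuming sock is a connected TCP socket (the helper name and error handling are my own additions):

#include <stdio.h>
#include <sys/socket.h>

/* Minimal sketch: after the last write of the session, tell the peer we
 * are done sending. The kernel still delivers any data already queued;
 * the peer then sees end-of-file once it has read everything. */
void finish_writes(int sock)
{
    if (shutdown(sock, SHUT_WR) == -1)
        perror("shutdown");

    /* The socket remains readable until the peer closes its side. */
}

Note that this only signals end-of-stream; it does not tell you when, or whether, the peer has actually read the data.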
Answer 2:
What about setting TCP_NODELAY and then resetting it back? Probably it could be done just before sending important data, or when we are done with sending a message.
send(sock, "notimportant", ...);
send(sock, "notimportant", ...);
send(sock, "notimportant", ...);
int flag = 1;
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));
send(sock, "important data or end of the current message", ...);
flag = 0;
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));
As the Linux man page says:
TCP_NODELAY ... setting this option forces an explicit flush of pending output ...
So it would probably be better to set it after the message, but I am not sure how it works on other systems.
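Following that reasoning, a standalone sketch of the "set it after the message" ordering on Linux (again assuming sock is a connected TCP socket; the message text is a placeholder):

send(sock, "whole message", ...);

/* Per the man page wording above, setting TCP_NODELAY now forces out
 * whatever is still queued from the preceding send() calls. */
int flag = 1;
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));

/* Optionally restore Nagle for subsequent, less urgent traffic. */
flag = 0;
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));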
Answer 3:
In RFC 1122, the name of the thing you are looking for is "PUSH". However, I do not know of any TCP API that implements "PUSH".
Some answers and comments deal with the Nagle algorithm. Most of them seem to assume that the Nagle algorithm delays every send. This assumption is not correct. Nagle delays sending only when a previous packet has not yet been acknowledged (http://www.unixguide.net/network/socketfaq/2.11.shtml).
To simplify it a little bit: TCP tries to send the first packet (of a series of packets) immediately, but delays subsequent packets until either a time-out is reached or the first packet is acknowledged, whichever occurs first.
One solution is to avoid these "subsequent packets". If your application calls send() more than once to transmit a single compound request, try to rewrite your application: assemble the request in user space, then call send() once.
Besides, when the send buffer contains enough data to fill the maximum size of a network packet, Nagle does not delay either. This means Nagle does not (really) delay a bulk send, even if you send() the bulk data in small pieces.
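As an illustration of that advice, here is a minimal sketch that assembles a two-part request in user space and hands it to the kernel with a single send(); the function name, parameters, and buffer size are assumptions of mine:

#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Minimal sketch: copy all parts of one compound request into a single
 * buffer, then issue one send(), so Nagle never sees a small
 * "subsequent" packet belonging to the same request. */
ssize_t send_compound_request(int sock, const char *header, size_t hlen,
                              const char *body, size_t blen)
{
    char buf[4096];                    /* assumed upper bound for one request */

    if (hlen + blen > sizeof(buf))
        return -1;                     /* request too large for this sketch */

    memcpy(buf, header, hlen);
    memcpy(buf + hlen, body, blen);

    return send(sock, buf, hlen + blen, 0);   /* one call, one request */
}

On POSIX systems, writev() achieves the same single-call effect without the extra copy by gathering several buffers into one system call.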
Answer 4:
There is no way that I am aware of in the standard TCP/IP socket interface to flush the data "all the way through to the remote end" and ensure it has actually been acknowledged.
Generally speaking, if your protocol needs "real-time" transfer of data, the best thing to do is to set the TCP_NODELAY option with setsockopt(). This disables the Nagle algorithm in the protocol stack, so that write() or send() on the socket maps more directly to sends out onto the network, instead of holding off sends while waiting for more bytes to become available and using the TCP-level timers to decide when to send. NOTE: turning off Nagle does not disable the TCP sliding window or anything like that, so it is always safe to do, but if you don't need the "real-time" properties, packet overhead can go up quite a bit.
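In contrast to toggling the option per message as in an earlier answer, a minimal sketch of setting it once, right after the connection is established (sock assumed to be a connected TCP socket):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Minimal sketch: disable Nagle for the whole lifetime of the connection,
 * so each write()/send() is pushed toward the network as soon as possible. */
int enable_nodelay(int sock)
{
    int flag = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
}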
Beyond that, if the normal TCP socket mechanisms don't fit your application, then generally you need to fall back to using UDP and building your own protocol features on the basic send/receive properties of UDP. This is very common when your protocol has special needs, but don't underestimate the complexity of doing this well and getting it all stable and functionally correct in all but relatively simple applications. As a starting point, a thorough study of TCP's design features will shed light on many of the issues that need to be considered.
Answer 5:
I think it would be extremely difficult, if not impossible, to implement correctly. What is the meaning of "flush" in this context? Bytes transmitted to the network? Bytes acknowledged by the receiver's TCP stack? Bytes passed on to the receiver's user-mode app? Bytes completely processed by the user-mode app?
Looks like you need to do it at the app level...
Answer 6:
TCP gives only best-effort delivery, so the act of having all the bytes leave Machine A is asynchronous with their all having been received at Machine B. The TCP/IP protocol stack knows, of course, but I don't know of any way to interrogate the TCP stack to find out if everything sent has been acknowledged.
By far the easiest way to handle the question is at the application level. Open a second TCP socket to act as a back channel and have the remote partner send you an acknowledgement that it has received the info you want. It will cost double but will be completely portable and will save you hours of programming time.
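A minimal sketch of the sender side of such an application-level acknowledgement; the second, already-connected socket ack_sock and the one-byte ACK format are assumptions for illustration:

#include <sys/socket.h>

/* Minimal sketch: send the payload on the data socket, then block until
 * the peer confirms receipt over the back channel. */
int send_and_wait_ack(int data_sock, int ack_sock, const void *buf, size_t len)
{
    char ack;

    if (send(data_sock, buf, len, 0) == -1)
        return -1;

    /* The peer is expected to write one byte on the back channel once it
     * has received (and, if desired, processed) the data. */
    if (recv(ack_sock, &ack, 1, MSG_WAITALL) != 1)
        return -1;

    return 0;
}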
Answer 7:
You could set the socket option SO_LINGER with a certain timeout and then close the socket, in order to make sure all data has been sent (or to detect failure to do so) upon closing the connection. Other than that, TCP is a "best effort" protocol, and it doesn't provide any real guarantees that data will ever actually reach the destination (in contrast to what some seem to believe); it just tries its best to get it delivered in the correct order and as soon as possible.
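A minimal sketch of that approach; the 10-second timeout is an arbitrary value chosen for illustration:

#include <sys/socket.h>
#include <unistd.h>

/* Minimal sketch: make close() block until the remaining data has been
 * transmitted, or until the linger timeout expires. */
int close_with_linger(int sock)
{
    struct linger lg;

    lg.l_onoff  = 1;     /* enable lingering on close() */
    lg.l_linger = 10;    /* wait up to 10 seconds for unsent data */

    if (setsockopt(sock, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg)) == -1)
        return -1;

    return close(sock);  /* blocks until the data is flushed or the timeout hits */
}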
Answer 8:
Use fsync():
/* sock_fd is an integer file descriptor from the socket(..., ...) call */
fsync(sock_fd);