Is there a way to flush a POSIX socket?

Posted 2019-01-07 14:18

Is there a standard call for flushing the transmit side of a POSIX socket all the way through to the remote end or does this need to be implemented as part of the user level protocol? I looked around the usual headers but couldn't find anything.

Tags: c sockets posix

8 Answers
女痞
Answer 2 · 2019-01-07 14:46

There is no way that I am aware of in the standard TCP/IP socket interface to flush the data "all the way through to the remote end" and ensure it has actually been acknowledged.

Generally speaking, if your protocol needs "real-time" transfer of data, the best thing to do is to set the TCP_NODELAY option with setsockopt(). This disables the Nagle algorithm in the protocol stack, so that write() or send() on the socket maps more directly to sends out onto the network, instead of holding off sends while waiting for more bytes to become available and relying on TCP-level timers to decide when to send. NOTE: turning off Nagle does not disable the TCP sliding window or anything else, so it is always safe to do, but if you don't need the "real-time" properties, packet overhead can go up quite a bit.

Beyond that, if the normal TCP socket mechanisms don't fit your application, then generally you need to fall back to UDP and build your own protocol features on its basic send/receive properties. This is very common when a protocol has special needs, but don't underestimate the complexity of doing it well and getting it stable and functionally correct in anything but relatively simple applications. As a starting point, a thorough study of TCP's design features will shed light on many of the issues that need to be considered.

女痞
Answer 3 · 2019-01-07 14:48

For Unix-domain sockets, you can use fflush(), but I'm thinking you probably mean network sockets. There isn't really a concept of flushing those. The closest things are:

  1. At the end of your session, calling shutdown(sock, SHUT_WR) to close out writes on the socket.

  2. On TCP sockets, disabling the Nagle algorithm with the TCP_NODELAY socket option, which is generally a terrible idea that will not reliably do what you want, even if it seems to take care of it on initial investigation.

It's very likely that handling, at the user protocol level, whatever issue is calling for a "flush" is going to be the right thing.

ら.Afraid
Answer 4 · 2019-01-07 14:51

In RFC 1122 the name of the thing you are looking for is "PUSH". However, I do not know of any TCP API that exposes "PUSH".

Some answers and comments deal with the Nagle algorithm. Most of them seem to assume that the Nagle algorithm delays every send. This assumption is not correct. Nagle delays sending only when a previous packet has not yet been acknowledged (http://www.unixguide.net/network/socketfaq/2.11.shtml).

To simplify it a little bit: TCP tries to send the first packet (of a row of packets) immediately but delays subsequent packets until either a time-out is reached or the first packet is acknowledged – whichever occurs first.

One solution is to avoid these "subsequent packets". If your application calls send() more than one time to transmit a single compound request, try to rewrite your application. Assemble the request in user space, then call send(). Once.

Besides, when the send buffer contains enough data to fill a maximum-size network packet, Nagle does not delay either. This means Nagle does not (really) delay a bulk transfer, even if you send() the bulk data in small pieces.

看我几分像从前
Answer 5 · 2019-01-07 14:55

Use fsync():

    /* sock_fd is the integer file descriptor from the socket(...) call */
    fsync(sock_fd);

Note, though, that fsync() is specified for files, not sockets; on most systems calling it on a socket descriptor simply fails with EINVAL, so this does not actually flush socket data.

叛逆
Answer 6 · 2019-01-07 14:56

I think it would be extremely difficult, if not impossible, to implement correctly. What would "flush" mean in this context? Bytes transmitted to the network? Bytes acknowledged by the receiver's TCP stack? Bytes passed on to the receiver's user-mode app? Bytes completely processed by the user-mode app?

Looks like you need to do it at the app level...

对你真心纯属浪费
Answer 7 · 2019-01-07 15:04

You could set the SO_LINGER socket option with a certain timeout and then close the socket, in order to make sure all data has been sent (or to detect failure to do so) upon closing the connection. Other than that, TCP is a "best effort" protocol; it doesn't provide any hard guarantee that data will ever actually reach the destination (contrary to what some seem to believe), it just tries its best to get it delivered in the correct order and as soon as possible.
