I don't really know much about sockets except how to read and write to them as if they were files. I know a little about using socket selectors. I don't get why you have to flush a socket, what's actually happening there? The bits just hang out somewhere in memory until they get pushed off? I read some things online about sockets, but it's all very abstract and high level.
What's actually happening here?
There's a certain amount of overhead involved in writing to a network socket and sending data. If data were sent every time a byte entered the socket, you'd end up with 40+ bytes of TCP/IP header for every byte of actual data. (That's assuming a TCP socket, of course; other socket types have different overheads.) To avoid that inefficiency, the socket maintains a local buffer, typically around one maximum segment in size (about 1460 bytes of payload on Ethernet). When that buffer fills, headers are wrapped around the data and the packet is sent off to its destination.
In many cases, you don't need each packet to be sent immediately; if you're transferring a file, early data may not be of any use without the final data of the file, so this works well. If you need to force data to be sent immediately, however, flushing the buffer will send any data which has not yet been sent.
Note that when you close a socket, it automatically flushes any remaining data, so there's no need to flush before you close.
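To make the buffering concrete, here is a small sketch in Python. It uses `socket.makefile()` to get a buffered file-like wrapper around a socket, which is the situation where "flushing a socket" actually means something: the bytes sit in a user-space buffer until `flush()` hands them to the kernel. A local `socketpair()` stands in for a real network connection.

```python
import socket

# Two connected sockets, standing in for a real client/server pair.
a, b = socket.socketpair()

# makefile() wraps the socket in a buffered file object. Writes accumulate
# in a user-space buffer (8 KiB by default) and are NOT handed to the
# kernel until the buffer fills or flush() is called.
w = a.makefile("wb")
w.write(b"hello")

# Nothing has been sent yet, so a non-blocking recv() finds no data.
b.setblocking(False)
try:
    b.recv(1024)
    buffered = False
except BlockingIOError:
    buffered = True          # the bytes are still in the file object's buffer

# flush() pushes the buffered bytes into the kernel's send queue.
w.flush()
b.setblocking(True)
data = b.recv(1024)          # now the data arrives

print(buffered, data)

w.close()
a.close()
b.close()
```

The same pattern applies to any buffered layer you put on top of a socket (C's `FILE*` via `fdopen`, Java's `BufferedOutputStream`, and so on): the flush empties *your* buffer into the kernel; what the kernel does with it afterwards is up to TCP.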
You can't really flush a socket.
(From How can I force a socket to send the data in its buffer?)
You can't force it. Period. TCP makes up its own mind as to when it
can send data. Now, normally when you call write() on a TCP socket,
TCP will indeed send a segment, but there's no guarantee and no way to
force this. There are lots of reasons why TCP will not send a
segment: a closed window and the Nagle algorithm are two things to
come immediately to mind.
Read the full post; it is quite in-depth and clarified a lot of things for me.
Nagle's algorithm is often enabled on TCP sockets. In a nutshell, it waits until there's a non-trivial amount of data to send before sending it. The goal is to balance transmission latency against the overhead cost of sending each packet.
The larger the data payload, the less bandwidth is wasted, because the header is of (mostly) fixed size. Furthermore, intermediate systems generally have performance limits determined more by packet rate than by overall data rate.
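If that latency trade-off is wrong for your application (e.g. an interactive protocol sending many small messages), the standard knob is the `TCP_NODELAY` socket option, which disables Nagle's algorithm so small writes go out immediately. A minimal sketch in Python:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm on this socket: small writes are sent
# immediately instead of being coalesced while earlier segments are
# still unacknowledged.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back; a non-zero value means Nagle is disabled.
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay)

s.close()
```

Even with `TCP_NODELAY` set, you still can't *force* a send in the sense the quoted post describes; you're only removing one of the reasons TCP might delay (a closed receive window, for instance, still stops transmission).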
I haven't used flush in socket programming. I do remember that there are socket options you can set (TCP_NODELAY comes to mind) that control whether stream-oriented sends try to reduce the number of small packets sent. In any case, you'd use sendall() to make sure that everything in the buffer is sent.
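The point of `sendall()` is that a plain `send()` is allowed to queue only *part* of your data (it returns the number of bytes actually accepted), whereas `sendall()` loops internally until every byte has been handed to the kernel. A small illustrative sketch, using a local `socketpair()` and a reader thread so the send can always complete:

```python
import socket
import threading

a, b = socket.socketpair()
payload = b"x" * 100_000   # large enough that a single send() might not take it all

received = bytearray()

def reader():
    # Drain the other end so the sender's kernel buffers never stay full.
    while len(received) < len(payload):
        received.extend(b.recv(65536))

t = threading.Thread(target=reader)
t.start()

# sendall() returns only after EVERY byte has been queued with the kernel;
# with send() you would have to loop over the return value yourself.
a.sendall(payload)
t.join()

print(len(received))
a.close()
b.close()
```

Note that even `sendall()` only guarantees the data has reached the kernel's send buffer, not that it has been transmitted or acknowledged by the peer.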
The shutdown() function is very useful as you are winding a connection down. Of course, you must still call close() afterwards if you don't want sockets hanging around.
Have a look at Beej's guide for more information.