I just looked at Wikipedia's entry on out-of-band data, and as far as I understand, OOB data is somehow flagged as more important than ordinary data but transmitted in a separate stream, which profoundly confuses me.
The actual question would be (besides "Could someone explain what OOB data is?"):
I'm writing a Unix application that uses sockets and needs select(), and I'm wondering what to do with the exceptfds parameter. Do I need to put all my sockets into it and react to such events, or can I just ignore them?
I think I found the answer on this page. In short:
I don't need to handle OOB data on the receiving side if I'm not sending any OOB data. I had thought that OOB data could be generated by the sender's OS.
You don't need to handle it at the receiving end even if the peer is sending it: OOB data is silently ignored in all circumstances unless you actively arrange to receive it.
I know you've decided you don't need to handle OOB data, but here are some things to keep in mind if you ever do care about OOB...
If this seems a bit confusing and worthless, that's because it mostly is. There are good reasons to use OOB, but it's rare. One example is FTP, where the user may be in the middle of a large transfer but decide to abort. The abort is sent as OOB data. At that point the server and client just eat any further "normal" data to drain anything that's still in transit. If the abort were handled inline with the data then all the outstanding traffic would have to be processed, only to be dumped.
It's good to be aware that OOB exists and to know the basics of how it works, just in case you ever do need it. But don't bother learning it inside out unless you're just curious; chances are you'll never use it.