I have written two pairs of programs (server.c and client.c) in Linux: one for the UNIX domain (AF_UNIX) and one for the Internet domain (AF_INET). Both are working fine!
In both servers, listen() is called with a backlog queue length of 3:

listen(sockfd, 3);
In the UNIX domain (AF_UNIX): while one client is connected to the server, if I try to connect more clients, three are kept in the queue and the request of the fourth is declined (as I desired: 3 in the waiting queue).
In the Internet domain (AF_INET): requests from more than three clients are kept in a pending queue.
Why isn't a request from a fourth client rejected, even though the backlog queue length is three? And why is the behavior of listen() (and related calls) protocol dependent?
Operating systems actually use larger queues for incoming TCP connections than the one specified to listen(). How much larger depends on the operating system. For a given listening socket, the kernel maintains two queues: one for incomplete connections (those still in the TCP three-way handshake) and one for completed connections waiting to be handed to accept(). The backlog argument historically specified the sum of both queues, but there is no formal definition of what backlog means. Berkeley-derived implementations add a fudge factor to the backlog, so the total queue length = factor * backlog. A very detailed and deep explanation is given in a book by W. Richard Stevens; a table showing the values for seven operating systems can be found in Stevens, Fenner, Rudoff, "Unix Network Programming: The Sockets Networking API", Volume 1, Third Edition, Page 108.
The platform is entitled to adjust the specified backlog up or down, according to its minimum and its default. These days the default is more like 500 than the 5 it started at in about 1983. You can't rely on the backlog being what you specified; there is no API for finding out what it really is, and there is no apparent valid application reason for wanting it to be shorter than the default.