Below, you see a python program that acts as a server listening for connection requests to port 9999:
# server.py
import socket
import time

# create a socket object
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# get local machine name
host = socket.gethostname()
port = 9999

# bind to the port
serversocket.bind((host, port))

# queue up to 5 requests
serversocket.listen(5)

while True:
    # establish a connection
    clientsocket, addr = serversocket.accept()
    print("Got a connection from %s" % str(addr))
    currentTime = time.ctime(time.time()) + "\r\n"
    clientsocket.send(currentTime.encode('ascii'))
    clientsocket.close()
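For completeness, a minimal client to exercise this server could look like the following sketch (an illustration only; it assumes it runs on the same machine as the server and simply reads the time string):

# client.py -- minimal client sketch for the server above
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = socket.gethostname()      # same machine as the server
s.connect((host, 9999))
data = s.recv(1024)              # the server sends the time, then closes
s.close()
print("The time got from the server is %s" % data.decode('ascii'))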
The question is: what is the function of the parameter of the socket.listen() method (i.e. 5)?
Based on the tutorials around the internet:
The backlog argument specifies the maximum number of queued connections and should be at least 0; the maximum value is system-dependent (usually 5), the minimum value is forced to 0.
But:
- What are these queued connections?
- Does it make any difference for client requests? (I mean, is a server running with socket.listen(5) different from a server running with socket.listen(1), in accepting connection requests or in receiving data?)
- Why is the minimum value zero? Shouldn't it be at least 1?
- Is there a preferred value?
- Is this backlog defined for TCP connections only, or does it apply to UDP and other protocols too?
NOTE: These answers are framed without any background in Python, but the questions are language-agnostic, so they can be answered all the same.
In simple words, the backlog parameter specifies the number of pending connections the queue will hold.
When multiple clients connect to the server, the server holds the incoming requests in a queue. The clients are arranged in the queue, and the server processes their requests one by one as the queue advances. Connections waiting in this way are what is meant by queued connections.
Yes, the two cases are different. The first case would allow up to 5 clients to be arranged in the queue; whereas with backlog=1, only 1 connection can be held in the queue, and further connection requests are dropped!
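The difference is easy to observe experimentally. Below is a rough sketch (port and names are illustrative): the server listens with a backlog of 1 and deliberately never calls accept(), so connections pile up in the queue. The first attempt or two connect fine, while later attempts time out or are refused; the exact cutoff is OS-dependent.

# backlog_demo.py -- sketch; Linux, for example, tends to allow
# roughly backlog + 1 completed connections before stalling the rest
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9999))
server.listen(1)                 # hold at most ~1 pending connection
# note: accept() is never called, so the queue never drains

clients = []
for i in range(5):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.settimeout(1.0)
    try:
        c.connect(("127.0.0.1", 9999))
        print("client %d: connected (queued in the backlog)" % i)
    except OSError as exc:
        print("client %d: failed: %s" % (i, exc))
    clients.append(c)            # keep references so the sockets stay open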
I have no idea about Python, but, as per this source, in C, a backlog argument of 0 may allow the socket to accept connections, in which case the length of the listen queue may be set to an implementation-defined minimum value.
This question has no well-defined answer. I'd say this depends on the nature of your application, as well as on your hardware and software configuration. Again, as per the source, the backlog is silently limited to between 1 and 5, inclusive (again, as per C).

No. Please note that there's no need to listen() or accept() for unconnected datagram sockets (UDP). This is one of the perks of using unconnected datagram sockets!
But do keep in mind that there are also TCP-based datagram socket implementations (called TCPDatagramSocket) which have a backlog parameter.
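For contrast, here is a minimal unconnected UDP sketch; note that there is no listen() or accept() anywhere, since there is no connection, and hence no backlog queue, to manage:

# udp_server.py -- sketch: datagram sockets need no listen()/accept()
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9999))        # bind only; no connection queue exists
while True:
    data, addr = sock.recvfrom(4096)  # read one datagram at a time
    print("got %r from %s" % (data, addr))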
When a TCP connection is being established, the so-called three-way handshake is performed. Both sides exchange some packets, and once they have done so the connection is called complete and is ready to be used by the application.

However, this three-way handshake takes some time, and during that time the connection is queued; this is the backlog. So you can set the maximum number of incomplete parallel connections via the .listen(n) call (note that according to the POSIX standard the value is only a hint; it may be totally ignored). If someone tries to establish a connection above the backlog limit, the other side will refuse it. So the backlog limit is about pending connections, not established ones.
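A small sketch of the pending-versus-accepted distinction (both ends in one process, with a port chosen by the OS): the client's connect() returns as soon as the kernel completes the handshake, even though the application has not called accept() yet.

# pending_demo.py -- sketch: the handshake completes while the
# connection waits in the backlog, before accept() is ever called
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))         # let the OS pick a free port
server.listen(5)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())  # returns: handshake done, connection queued
print("client connected before any accept() call")

conn, addr = server.accept()          # dequeue the completed connection
print("server accepted", addr)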
A higher backlog limit will be better in most cases. Note that the maximum limit is OS-dependent; e.g., cat /proc/sys/net/core/somaxconn gives me 128 on my Ubuntu.
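From Python you can check both the constant recorded when the interpreter was built and, on Linux, the live value from procfs (the same file as above); a quick sketch:

# somaxconn_check.py -- sketch
import socket

print(socket.SOMAXCONN)   # constant recorded when Python was built

# on Linux the runtime cap lives in procfs
with open("/proc/sys/net/core/somaxconn") as f:
    print(f.read().strip())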
The function of the parameter appears to be to limit the number of incoming connect requests a server will retain in a queue, assuming it can serve the current request, plus the small number of queued pending requests, in a reasonable amount of time while under high load. Here's a good paragraph I came across that lends a little context around this argument...
https://docs.python.org/3/howto/sockets.html#creating-a-socket
There's text earlier in the document that suggests clients should dip in and out of a server, so you don't build up a long queue of requests in the first place...
The linked HOWTO guide is a must-read when getting up to speed on network programming with sockets. It really brings some big-picture themes into focus. How the server socket manages this queue in terms of implementation details is another story, and probably an interesting one. I suppose the motivation for this design is more telling: without it, the barrier to inflicting a denial-of-service attack would be very, very low.
As for the reason for a minimum value of 0 rather than 1, we should keep in mind that 0 is still a valid value, meaning queue up nothing. That is essentially to say: let there be no request queue, and just reject connections outright if the server socket is currently serving a connection. The currently active connection being served should always be kept in mind in this context; it's the only reason a queue would be of interest in the first place.
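Even so, listen(0) still yields a working listening socket; per the POSIX wording quoted earlier, the queue length just falls back to an implementation-defined minimum. A sketch, assuming Linux-like behavior (other platforms may differ):

# listen_zero.py -- sketch: backlog 0 still accepts a connection on
# Linux, where the effective minimum queue length is implementation-defined
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
s.listen(0)                       # "queue up nothing"

c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(s.getsockname())        # the kernel still completes this one
conn, addr = s.accept()
print("accepted even with listen(0):", addr)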
This brings us to the next question, regarding a preferred value. This is all a design decision: do you want to queue up requests or not? If so, you may pick a value you feel is warranted based on expected traffic and known hardware resources, I suppose. I doubt there's anything formulaic in picking a value. This makes me wonder how lightweight a request is in the first place, such that you'd face a penalty for queuing anything up on the server.
UPDATE
I wanted to substantiate the comments from user207421, so I went to look up the Python source. Unfortunately this level of detail is not to be found in the socket.py source but rather in socketmodule.c#L3351-L3382, as of hash 530f506.
The comments there are very illuminating. In short, they explain that when no argument is passed, a default backlog is chosen that is high enough to avoid connection drops for common workloads, yet not so high as to waste resources (min(SOMAXCONN, 128) in practice), and that a supplied value below 0 is forced up to 0.
Going further down the rabbit hole into the externals, I traced the calls from socketmodule down to socket.h and socket.c, using Linux as a concrete platform backdrop for discussion purposes.
There's more info to be found in the man page
http://man7.org/linux/man-pages/man2/listen.2.html
And the corresponding docstring says that the backlog, if specified, must be at least 0 (a lower value is set to 0); that it specifies the number of unaccepted connections the system will allow before refusing new connections; and that a reasonable default is chosen when it is omitted.
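Two consequences of that docstring can be tried directly; a sketch:

# listen_args.py -- sketch: the argument is optional (Python 3.5+),
# and values below 0 are silently forced up to 0
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
s.listen()        # a reasonable default backlog is chosen internally
s.close()

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.bind(("127.0.0.1", 0))
s2.listen(-7)     # does not raise; the value is clamped to 0
s2.close()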
One additional source identifies the kernel as being responsible for the backlog queue.
They briefly go on to relate how the unaccepted / queued connections are partitioned in the backlog (a useful figure is included in the linked source).