In socket programming, you create a listening socket and then for each client that connects, you get a normal stream socket that you can use to handle the client's request. The OS manages the queue of incoming connections behind the scenes.
Two processes cannot bind to the same port at the same time - by default, anyway.
I'm wondering if there's a way (on any well-known OS, especially Windows) to launch multiple instances of a process, such that they all bind to the same socket and effectively share the accept queue. Each process instance could then be single-threaded; it would just block when accepting a new connection. When a client connected, one of the idle process instances would accept that client.
This would allow each process to have a very simple, single-threaded implementation, sharing nothing unless through explicit shared memory, and the user would be able to adjust the processing bandwidth by starting more instances.
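To make it concrete, each worker instance would be little more than a plain blocking accept loop, roughly like this sketch (POSIX sockets; port 8080 is just an example). Whether several instances can actually run this against the same port at once is exactly what I'm asking:

```c
/* Sketch of one single-threaded worker instance (POSIX sockets). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                    /* example port */

    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);

    for (;;) {
        int client = accept(listener, NULL, NULL);  /* blocks until a client arrives */
        if (client < 0)
            continue;
        /* ... handle the request synchronously, sharing nothing ... */
        close(client);
    }
}
```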
Does such a feature exist?
Edit: For those asking "Why not use threads?" Obviously threads are an option. But with multiple threads in a single process, all objects are shareable and great care has to be taken to ensure that objects are either not shared, or are only visible to one thread at a time, or are absolutely immutable, and most popular languages and runtimes lack built-in support for managing this complexity.
By starting a handful of identical worker processes, you would get a concurrent system in which the default is no sharing, making it much easier to build a correct and scalable implementation.
Another approach on Windows, if you are using HTTP, is to use HTTP.SYS; it avoids many of the complex details. It allows multiple processes to listen to different URLs on the same port. On Server 2003/2008/Vista/7 this is how IIS works, so you can share ports with it. (On XP SP2 HTTP.SYS is supported, but IIS 5.1 does not use it.)
Other high level APIs (including WCF) make use of HTTP.SYS.
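At the native level this goes through the HTTP Server API in HTTP.SYS. A rough sketch using the v1 API (the URL prefix is just an example, and registering a prefix normally requires a URL reservation via netsh http add urlacl or administrator rights):

```c
/* Sketch: registering a URL prefix with HTTP.SYS via the HTTP Server API (v1). */
#include <windows.h>
#include <http.h>
#include <stdio.h>
#pragma comment(lib, "httpapi.lib")

int main(void)
{
    HTTPAPI_VERSION ver = HTTPAPI_VERSION_1;
    HANDLE queue = NULL;

    /* Initialize the HTTP Server API for this process. */
    if (HttpInitialize(ver, HTTP_INITIALIZE_SERVER, NULL) != NO_ERROR)
        return 1;

    /* Create this process's own request queue. */
    if (HttpCreateHttpHandle(&queue, 0) != NO_ERROR)
        return 1;

    /* Claim a URL prefix; another process can claim a different prefix
       on the same port, e.g. http://+:80/otherapp/ . */
    if (HttpAddUrl(queue, L"http://+:80/myapp/", NULL) != NO_ERROR)
        return 1;

    printf("Listening for http://+:80/myapp/ requests...\n");

    /* From here the process would loop on HttpReceiveHttpRequest() /
       HttpSendHttpResponse() against its own queue handle. */
    getchar();

    HttpRemoveUrl(queue, L"http://+:80/myapp/");
    HttpTerminate(HTTP_INITIALIZE_SERVER, NULL);
    return 0;
}
```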
Not sure how relevant this is to the original question, but in Linux kernel 3.9 there is a patch adding TCP and UDP support for the SO_REUSEPORT socket option. The new socket option allows multiple sockets on the same host to bind to the same port, and is intended to improve the performance of multithreaded network server applications running on top of multicore systems. More information can be found in the LWN article "SO_REUSEPORT in Linux Kernel 3.9", which notes:
the SO_REUSEPORT option is non-standard, but available in a similar form on a number of other UNIX systems (notably, the BSDs, where the idea originated). It seems to offer a useful alternative for squeezing the maximum performance out of network applications running on multicore systems, without having to use the fork pattern.
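A minimal sketch of how it is used (Linux 3.9 or later; port 8080 is just an example): every process that wants to share the port sets the option before bind(), and the kernel then distributes incoming connections across the listening sockets:

```c
/* Sketch: several independent processes can each run this and share port 8080
   via SO_REUSEPORT (Linux 3.9+). All of them must set the option before bind(). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;

    /* The crucial part: set before bind(), in every process. */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0) {
        perror("setsockopt(SO_REUSEPORT)");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                     /* example port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");       /* fails if another listener did not set the option */
        return 1;
    }
    listen(fd, SOMAXCONN);

    /* Each process just accept()s on its own; the kernel spreads incoming
       connections across all processes listening on this port. */
    for (;;) {
        int client = accept(fd, NULL, NULL);
        if (client >= 0)
            close(client);                           /* ...handle the request here... */
    }
}
```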
Have a single task whose sole job is to listen for incoming connections. When a connection is received, it accepts the connection - this creates a separate socket descriptor. The accepted socket is passed to one of your available worker tasks, and the main task goes back to listening.
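A minimal sketch of that pattern, with the worker tasks as detached threads (pthreads here, so compile with -pthread; the port is just an example):

```c
/* Sketch: one listener task accepts; each accepted socket is handed to a worker. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void *worker(void *arg)
{
    int client = *(int *)arg;
    free(arg);
    /* ... read the request from 'client' and respond ... */
    close(client);
    return NULL;
}

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                      /* example port */
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);

    for (;;) {
        int *client = malloc(sizeof(int));
        *client = accept(listener, NULL, NULL);       /* new descriptor per client */
        if (*client < 0) { free(client); continue; }

        pthread_t tid;
        pthread_create(&tid, NULL, worker, client);   /* hand the socket off */
        pthread_detach(tid);                          /* main task goes back to listening */
    }
}
```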
Starting with Linux 3.9, you can set SO_REUSEPORT on a socket and then have multiple unrelated processes share that socket. That's simpler than the prefork scheme: no more signal troubles, no file descriptors leaking to child processes, and so on.
Linux 3.9 introduced a new way of writing socket servers
The SO_REUSEPORT socket option
It sounds like what you want is one process listening for new clients and then handing off the connection once you get one. To do that across threads is easy, and in .NET you even have the BeginAccept etc. methods to take care of a lot of the plumbing for you. Handing off connections across process boundaries would be complicated and would not have any performance advantage.
Alternatively you can have multiple processes bound and listening on the same socket.
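A sketch of the idea in plain Winsock (port 8080 is just an example); the key is SO_REUSEADDR, which on Windows permits a second process to bind and listen on a port that is already in use:

```c
/* Sketch: two processes can each run this on Windows; SO_REUSEADDR lets the
   second one bind and listen on the same port as the first. */
#include <winsock2.h>
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    BOOL reuse = TRUE;   /* Windows semantics: allows a second active bind */
    setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
               (const char *)&reuse, sizeof(reuse));

    struct sockaddr_in addr;
    ZeroMemory(&addr, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                      /* example port */

    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);

    for (;;) {
        SOCKET client = accept(listener, NULL, NULL);
        if (client == INVALID_SOCKET)
            continue;
        printf("Got a connection in process %lu\n", GetCurrentProcessId());
        closesocket(client);
    }
}
```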
If you fire up two processes each executing code like that, it works, and the first process seems to get all the connections. If the first process is killed, the second one then gets the connections. With socket sharing like that, I'm not sure exactly how Windows decides which process gets new connections, although a quick test points to the oldest process getting them first. Whether it shares the load if the first process is busy, or anything like that, I don't know.
I would like to add that sockets can be shared on Unix/Linux via AF_UNIX sockets (inter-process sockets). What seems to happen is that a new socket descriptor is created that is somewhat of an alias to the original one. This new socket descriptor is sent via the AF_UNIX socket to the other process. This is especially useful in cases where a process cannot fork() to share its file descriptors, for example when using libraries that prevent this due to threading issues. You should create a Unix domain socket and use libancillary to send over the descriptor; a sketch of the underlying mechanism follows the references below.
See the unix(7) man page for creating AF_UNIX sockets, and the libancillary documentation for example code.
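Under the hood, libancillary wraps the SCM_RIGHTS ancillary-data mechanism of sendmsg()/recvmsg(). A sketch of the sending side, assuming unix_sock is an already-connected AF_UNIX socket and fd_to_share is the descriptor being handed over:

```c
/* Sketch: passing a file/socket descriptor to another process over an
   already-connected AF_UNIX socket, using SCM_RIGHTS (what libancillary wraps). */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int unix_sock, int fd_to_share)
{
    char dummy = 'x';                                 /* must send at least one byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {
        struct cmsghdr align;                         /* ensures proper alignment */
        char buf[CMSG_SPACE(sizeof(int))];
    } ctrl;
    memset(&ctrl, 0, sizeof(ctrl));

    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof(ctrl.buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;                     /* "I am sending descriptors" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_share, sizeof(int));

    /* The receiver calls recvmsg() and extracts a new descriptor from the
       SCM_RIGHTS control message; it refers to the same open socket. */
    return sendmsg(unix_sock, &msg, 0) < 0 ? -1 : 0;
}
```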