I am working on a school project where I had to write a multi-threaded server, and now I am comparing it to Apache by running some tests against it. I am using autobench to help with that, but after I run a few tests, or if I give it too high a rate (around 600+) at which to make the connections, I get a "Too many open files" error.
After I am done dealing with the request, I always call close() on the socket. I have tried calling shutdown() as well, but nothing seems to help. Is there any way around this?
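Roughly, my per-request cleanup looks like this (a simplified sketch, not my actual code):

    #include <sys/socket.h>
    #include <unistd.h>

    /* Simplified per-request cleanup: optionally shut down both directions,
     * then release the descriptor. */
    void finish_request(int conn_fd)
    {
        shutdown(conn_fd, SHUT_RDWR);  /* tried this too, doesn't seem to matter */
        close(conn_fd);
    }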
I had a similar problem. The quick solution is to raise the open-file limit (ulimit -n) and restart the process.
The explanation is as follows: each server connection is a file descriptor. In CentOS, Red Hat and Fedora, and probably others, the per-user file descriptor limit is 1024 - no idea why. You can see it easily by typing: ulimit -n
Note that this has little to do with the system-wide maximum number of open files (/proc/sys/fs/file-max).
In my case the problem was with Redis, so I raised the limit with ulimit -n and then restarted it. In your case, instead of Redis, you need to restart your server.
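If you would rather raise the limit from inside the program than from the shell, a minimal sketch using setrlimit() (assuming a Linux/POSIX system) is to bump the soft limit up to the hard limit at startup:

    #include <stdio.h>
    #include <sys/resource.h>

    /* Raise the soft RLIMIT_NOFILE (open files) limit up to the hard limit.
     * Call this early in main(), before the server starts accepting. */
    static int raise_fd_limit(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return -1;
        }

        rl.rlim_cur = rl.rlim_max;              /* soft limit -> hard limit */

        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return -1;
        }

        printf("open file limit is now %llu\n", (unsigned long long)rl.rlim_cur);
        return 0;
    }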
I had this problem too. You have a file handle leak. You can debug this by printing out a list of all the open file handles (on POSIX systems):
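A Linux-specific sketch is to walk /proc/self/fd and print what each descriptor points at (from outside the process, lsof -p <pid> shows the same thing):

    #include <dirent.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Print every open file descriptor of the current process.
     * Linux-specific: relies on /proc/self/fd. */
    static void dump_open_fds(void)
    {
        DIR *dir = opendir("/proc/self/fd");
        struct dirent *entry;
        char path[PATH_MAX], target[PATH_MAX];

        if (dir == NULL) {
            perror("opendir(/proc/self/fd)");
            return;
        }

        while ((entry = readdir(dir)) != NULL) {
            if (entry->d_name[0] == '.')
                continue;
            snprintf(path, sizeof(path), "/proc/self/fd/%s", entry->d_name);
            ssize_t len = readlink(path, target, sizeof(target) - 1);
            if (len >= 0) {
                target[len] = '\0';
                printf("fd %s -> %s\n", entry->d_name, target);
            }
        }
        closedir(dir);
    }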
By dumping out all the open files you will quickly figure out where your file handle leak is.
If your server spawns subprocesses - e.g. if this is a 'fork'-style server, or if you are spawning other processes (e.g. via CGI) - you have to make sure to create your file handles with "close-on-exec" (cloexec), both for regular files and for sockets.
Without cloexec, every time you fork or spawn, all open file handles are cloned in the child process.
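A sketch of what that looks like in C on Linux (O_CLOEXEC and SOCK_CLOEXEC set the flag atomically at creation time; fcntl() works for descriptors you already have; file name and flags are just examples):

    #include <fcntl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Illustrative only: create descriptors with close-on-exec so children
     * spawned via fork()/exec() do not inherit them. */
    void cloexec_examples(void)
    {
        /* Regular file: set O_CLOEXEC atomically at open() time. */
        int log_fd = open("access.log",
                          O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644);

        /* Socket: SOCK_CLOEXEC does the same for socket() on Linux. */
        int listen_fd = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);

        /* For a descriptor you already have (e.g. one returned by accept()),
         * set the flag after the fact with fcntl(); on Linux, accept4()
         * with SOCK_CLOEXEC avoids the race entirely. */
        int some_fd = dup(log_fd);
        if (some_fd >= 0)
            fcntl(some_fd, F_SETFD, FD_CLOEXEC);

        /* Clean up the descriptors used for this demonstration. */
        if (some_fd >= 0)   close(some_fd);
        if (listen_fd >= 0) close(listen_fd);
        if (log_fd >= 0)    close(log_fd);
    }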
It is also really easy to fail to close network sockets - e.g. just abandoning them when the remote party disconnects. This will leak handles like crazy.
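For example, a per-connection loop should release the descriptor on every exit path, including the one where the peer just hangs up (recv() returning 0) - a sketch:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Handle one client and always release the descriptor, even when the
     * remote side disconnects or an error occurs. */
    void handle_connection(int conn_fd)
    {
        char buf[4096];

        for (;;) {
            ssize_t n = recv(conn_fd, buf, sizeof(buf), 0);
            if (n == 0)         /* peer closed the connection */
                break;
            if (n < 0) {        /* error on the socket */
                perror("recv");
                break;
            }
            /* ... parse the request and send the response here ... */
        }

        if (close(conn_fd) != 0)  /* reached on every path */
            perror("close");
    }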
When your program has more open descriptors than the open-files ulimit allows (ulimit -a will list it), the kernel will refuse to open any more file descriptors. Make sure you don't have a file descriptor leak - for example, run the server for a while, then stop it and check whether any extra fds are still open while it is idle. If it is still a problem, change the nofile ulimit for your user in /etc/security/limits.conf.
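As a sketch, the relevant lines in /etc/security/limits.conf look like this ("youruser" and the values are placeholders; you need to log out and back in for them to take effect):

    youruser    soft    nofile    4096
    youruser    hard    nofile    8192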
I had the same problem and I wasn't bothering to check the return values of the close() calls. When I started checking the return value, the problem mysteriously vanished.
I can only assume it was an optimisation glitch of the compiler (gcc in my case): it assumed that close() calls have no side effects and can be omitted if their return values aren't used.
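Either way, checking the return value is cheap - a small wrapper like this (names are just for illustration) also surfaces real errors such as EBADF, EINTR, or deferred write failures:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Close a descriptor and log any failure instead of silently ignoring it. */
    void close_checked(int fd, const char *what)
    {
        if (close(fd) != 0)
            fprintf(stderr, "close(%s, fd=%d) failed: %s\n",
                    what, fd, strerror(errno));
    }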