I am working on a school project where I had to write a multi-threaded server, and now I am comparing it to Apache by running some tests against it. I am using autobench to help with that, but after I run a few tests, or if I give it too high a connection rate (around 600+), I get a "Too many open files" error.
After I am done dealing with a request, I always do a close() on the socket. I have tried using the shutdown() function as well, but nothing seems to help. Is there any way around this?
There are multiple places where Linux can have limits on the number of file descriptors you are allowed to open.
You can check the following:
cat /proc/sys/fs/file-max
That will give you the system-wide limit on file descriptors.
On the shell level, this will tell you your personal limit:
ulimit -n
This can be changed in /etc/security/limits.conf - it's the nofile param.
However, if you're closing your sockets correctly, you shouldn't receive this unless you're opening a lot of simultaneous connections. It sounds like something is preventing your sockets from being closed appropriately. I would verify that they are being handled properly.
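One quick way to watch for a leak is to track how many descriptors the server process itself holds while the test runs; if the count keeps climbing at a steady request rate, sockets are not being closed. This is only a sketch, with myserver as a placeholder for your server's process name:
# count file descriptors held by the server (assumes a single myserver process)
ls /proc/$(pidof myserver)/fd | wc -l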
Use
lsof -u `whoami` | wc -l
to find how many open files the user has.
TCP has a feature called "TIME_WAIT" that ensures connections are closed cleanly. It requires one end of the connection to keep the socket around for a while after it has been closed.
In a high-performance server, it's important that it's the clients who go into TIME_WAIT, not the server. Clients can afford to have a port open, whereas a busy server can rapidly run out of ports or have too many open FDs.
To achieve this, the server should never close the connection first -- it should always wait for the client to close it.
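As an illustration (not part of the original answer), you can check which machine is accumulating these sockets; ss from iproute2 lists everything currently in TIME_WAIT, and piping the output to wc -l gives a rough count:
# list TCP sockets currently sitting in TIME_WAIT
ss -tan state time-wait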
One more note about CentOS: when you use "systemctl" to launch the process, you have to modify the systemd unit file /usr/lib/systemd/system/processName.service and add a line for the open-file limit there, like the sketch below:
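The original answer's exact line is not preserved here; a minimal sketch, assuming the standard LimitNOFILE directive and a placeholder value:
[Service]
LimitNOFILE=65535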
And then just reload the systemd configuration:
systemctl daemon-reload
That directive sets the maximum number of files the service may have open at the same time.
Solved:
At the end of the file
/etc/security/limits.conf
you need to add lines raising the nofile limit (see the sketch just below). Then, in the current console, as root (sudo does not work), run:
ulimit -n 16384
This step is optional if you are able to restart the server instead.
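The exact limits.conf lines are not preserved in the text above; a sketch of what they would look like, where the * domain and the soft/hard pair are my assumptions and only the 16384 value comes from the answer itself:
* soft nofile 16384
* hard nofile 16384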
In the /etc/nginx/nginx.conf file, set the new value of worker_connections equal to 16384 divided by the value of worker_processes (see the sketch below).
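A sketch of that part of nginx.conf, assuming four worker processes purely to illustrate the division (16384 / 4 = 4096):
worker_processes 4;
events {
    worker_connections 4096;
}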
If you did not run
ulimit -n 16384
you will need to reboot; after that the problem will go away.
PS:
If, after the fix, you still see
error accept() failed (24: Too many open files)
in the logs, then in the nginx configuration also specify (for example):
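A sketch of such a setting, using nginx's worker_rlimit_nofile directive (which raises the per-worker open-file limit) and, as an assumption, the same 16384 value as the rest of the answer:
worker_rlimit_nofile 16384;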
It can take a bit of time before a closed socket is really freed up. Use
lsof
to list open files, and
cat /proc/sys/fs/file-max
to see if there's a system limit.