Socket accept - “Too many open files”

Posted 2019-01-07 04:43

I am working on a school project where I had to write a multi-threaded server, and now I am comparing it to Apache by running some tests against it. I am using autobench to help with that, but after I run a few tests, or if I give it too high a rate (around 600+) at which to make connections, I get a "Too many open files" error.

After I am done dealing with a request, I always call close() on the socket. I have tried using the shutdown() function as well, but nothing seems to help. Is there any way around this?
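Simplified, my handling loop looks roughly like this (handle_request is just a stand-in for my actual request handling):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Placeholder: reads the request and writes a response. */
static void handle_request(int fd) {
    (void)fd;
}

/* Simplified accept loop: every accepted descriptor is closed on every path. */
static void serve(int listen_fd) {
    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd < 0) {
            if (errno == EMFILE || errno == ENFILE)
                fprintf(stderr, "accept: %s\n", strerror(errno));
            continue;
        }
        handle_request(conn_fd);
        shutdown(conn_fd, SHUT_RDWR);  /* tried with and without this */
        close(conn_fd);
    }
}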

Tags: c sockets
10 Answers
一夜七次
#2 · 2019-01-07 05:08

There are multiple places where Linux can have limits on the number of file descriptors you are allowed to open.

You can check the following:

cat /proc/sys/fs/file-max

That will give you the system-wide limit on file descriptors.

At the shell level, this will tell you your per-user limit:

ulimit -n

This can be changed in /etc/security/limits.conf - it's the nofile param.

However, if you're closing your sockets correctly, you shouldn't hit this unless you're opening a lot of simultaneous connections. It sounds like something is preventing your sockets from being closed appropriately; I would verify that they are being handled properly.
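If you want to check or raise the limit from inside the server itself, getrlimit()/setrlimit() with RLIMIT_NOFILE do this programmatically. A minimal sketch (an unprivileged process can only raise the soft limit up to the hard limit):

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %llu, hard limit: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Raise the soft limit as far as the hard limit allows. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");
    return 0;
}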

Root(大扎)
#3 · 2019-01-07 05:10

Use lsof -u `whoami` | wc -l to find out how many files the user has open.
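If you would rather count from inside the process, one Linux-specific sketch is to list /proc/self/fd:

#include <dirent.h>
#include <stdio.h>

/* Count this process's open file descriptors by listing /proc/self/fd (Linux). */
static int count_open_fds(void) {
    DIR *d = opendir("/proc/self/fd");
    if (d == NULL)
        return -1;
    int count = 0;
    for (struct dirent *e; (e = readdir(d)) != NULL; )
        if (e->d_name[0] != '.')  /* skip "." and ".." */
            count++;
    closedir(d);
    return count - 1;  /* opendir itself holds one descriptor */
}

int main(void) {
    printf("open fds: %d\n", count_open_fds());
    return 0;
}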

放荡不羁爱自由
#4 · 2019-01-07 05:15

TCP has a state called TIME_WAIT that ensures connections are closed cleanly. It requires the end that closes the connection first to keep the socket around for a while after the close, so that late packets are handled properly.

In a high-performance server, it's important that it's the clients who go into TIME_WAIT, not the server. Clients can afford to keep a port tied up, whereas a busy server can rapidly run out of ports or end up with too many open FDs.

To achieve this, the server should never close the connection first -- it should always wait for the client to close it.
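Related to TIME_WAIT on the server side: setting SO_REUSEADDR on the listening socket lets the server rebind its port even while old sockets linger in TIME_WAIT. A minimal sketch (this helps with restarting the server cleanly, not with descriptor exhaustion itself):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a listening socket that can rebind while old sockets sit in TIME_WAIT. */
static int make_listener(unsigned short port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    int yes = 1;
    /* Allow bind() to succeed even if stale TIME_WAIT entries exist for the port. */
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, SOMAXCONN) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}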

劫难
#5 · 2019-01-07 05:16

Just another piece of information, about CentOS: when using systemctl to launch the process, you have to modify the unit file /usr/lib/systemd/system/processName.service and add this line to it:

LimitNOFILE=50000

Then reload the systemd configuration and restart the service (daemon-reload alone does not apply the new limit to the already-running process):

systemctl daemon-reload
systemctl restart processName.service
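For reference, the directive belongs in the [Service] section of the unit, roughly like this (the ExecStart path here is only a placeholder):

[Service]
# placeholder path for the actual daemon
ExecStart=/usr/local/bin/processName
LimitNOFILE=50000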
何必那么认真
#6 · 2019-01-07 05:20

This error means you have hit the maximum number of simultaneously open files.

Solved:

At the end of the file /etc/security/limits.conf you need to add the following lines:

* soft nofile 16384
* hard nofile 16384

In the current console, as root (sudo does not work), run:

ulimit -n 16384

This step is optional: if it is possible, you can instead restart the server.

In /etc/nginx/nginx.conf, set worker_connections to the new value: 16384 divided by the value of worker_processes.

If you did not run ulimit -n 16384, you will need to reboot for the change to take effect; then the problem will go away.

PS:

If, after this fix, the error accept() failed (24: Too many open files) is still visible in the logs:

In the nginx configuration, specify (for example):

worker_processes 2;

worker_rlimit_nofile 16384;

events {
  worker_connections 8192;
}
爷、活的狠高调
#7 · 2019-01-07 05:26

It can take a bit of time before a closed socket is really freed up (see the TIME_WAIT discussion above).

Use lsof to list open files.

Use cat /proc/sys/fs/file-max to see if there's a system-wide limit.
