I have a WebSocket service. Strangely, it fails with the error "too many open files", even though I have configured the system limits:
/etc/security/limits.conf
* soft nofile 65000
* hard nofile 65000
/etc/sysctl.conf
net.ipv4.ip_local_port_range = 1024 65000
ulimit -n
// output: 65000
So I think my system configuration is correct.
My service is managed by supervisor. Is it possible that supervisor imposes its own limits?
Checking a process started by supervisor:
cat /proc/815/limits
Max open files 1024 4096 files
Checking a process started manually:
cat /proc/900/limits
Max open files 65000 65000 files
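For reference, a quick way to repeat this check without looking up the PID by hand (mywebsocket below is just a placeholder for your own process name):
cat /proc/$(pgrep -f mywebsocket | head -n 1)/limits | grep "Max open files"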
So the cause is that the service is managed by supervisor: if I restart supervisor and its child processes, "max open files" is correct (65000), but it is wrong (1024) when supervisor is started automatically after a system reboot.
Maybe supervisor starts too early in the boot sequence, so the system limits are not applied yet when it starts?
edit:
system: ubuntu 12.04 64bit
It's not a supervisor problem: every process started automatically after a system reboot ignores the system configuration (max open files = 1024), but after a manual restart the limit is correct.
update
Maybe the problem is:
- https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/938669
- http://bryanmarty.com/blog/2012/02/10/setting-nofile-limit-upstart/
- Increase max open files for Ubuntu/Upstart (initctl)
Now the question is: how do I set a global nofile limit? I don't want to add a nofile limit to every Upstart script I need.
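A possible workaround (untested here): since supervisor's children inherit its limits, it may be enough to add a limit stanza only to supervisor's own Upstart job rather than to every service script. Assuming the job file is /etc/init/supervisor.conf (the path depends on your install):
# /etc/init/supervisor.conf -- Upstart limit stanza: soft and hard nofile limits
limit nofile 65000 65000
Then restart the job (sudo service supervisor restart) or reboot and re-check /proc/<pid>/limits.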
I had the same problem. Even though ulimit -Sn shows my new limit, running supervisorctl restart all and cat-ing the proc files did not show the new limits.
The problem is that supervisord still has the original limits. Therefore any child processes it creates still have the original limits.
So, the solution is to kill and restart supervisord.
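For example, assuming supervisor was installed from the Ubuntu package (so there is a supervisor init job; adjust the names if you installed it another way):
sudo service supervisor stop
sudo service supervisor start
# or, if supervisord was started by hand (the config path is an assumption):
supervisorctl shutdown
supervisord -c /etc/supervisord.conf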
Fixed this issue by setting the limits for all users in the file:
$ cat /etc/security/limits.d/custom.conf
* hard nofile 550000
* soft nofile 550000
REBOOT THE SERVER after setting the limits.
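A quick sanity check after the reboot (replace <pid> with the PID of one of your supervisor-managed processes):
ulimit -Sn
ulimit -Hn
cat /proc/<pid>/limits | grep "Max open files"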
VERY IMPORTANT:
The /etc/security/limits.d/ folder contains user-specific limits, in my case Hadoop 2 (Cloudera) related limits. These user-specific limits override the global limits, so if your limits are not being applied, be sure to check both the user-specific limits in the folder /etc/security/limits.d/ and the file /etc/security/limits.conf.
CAUTION:
Setting user-specific limits is the way to go in all cases; setting the global (*) limit should be avoided. In my case it was an isolated environment and I just needed to eliminate the file-limit issue from my experiment.
Hope this saves someone some hair - as I spent too much time pulling my hair out chunk by chunk!
To any weary googlers: you might be looking for the minfds setting in the supervisor config. This setting seems to take effect for both the supervisord process and its children. I tried a number of other strategies, including launching a shell script that set the limits before executing the actual program, but this was the only thing that worked.
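A minimal sketch of that setting in supervisord.conf (65535 is just an illustrative value):
[supervisord]
; supervisord raises its own soft FD limit to at least this value before spawning children
minfds=65535
Note that supervisord itself has to be restarted (not just supervisorctl restart all) for this to take effect.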
Try editing /etc/sysctl.conf to adjust the limit globally.
For example, to force the limit to 100000 files:
vi /etc/sysctl.conf
Append:
fs.file-max = 100000
Save and close the file. Users need to log out and log back in again for the changes to take effect, or just type the following command:
sysctl -p
You can find your limit with:
cat /proc/sys/fs/file-max
or with: sysctl -a | grep file
Change it by writing to the /proc/sys/fs/file-max file or with:
sysctl -w fs.file-max=100000
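Note that fs.file-max is the system-wide ceiling on file handles, while the nofile values in limits.conf are per-process, so you may need both. To see current usage against the ceiling (three columns: allocated, unused, maximum):
cat /proc/sys/fs/file-nr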
luqmaan's answer was the ticket for me, except for one small caveat: the * wildcard doesn't apply to root in Ubuntu (as described in limits.conf's comments).
You need to explicitly set the limit for root if supervisord is started as the root user:
vi /etc/security/limits.conf
root soft nofile 65535
root hard nofile 65535
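Then restart supervisord and confirm it picked up the new limit, for example:
sudo service supervisor restart
cat /proc/$(supervisorctl pid)/limits | grep "Max open files"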
You can set the limit for the service this way:
add: LimitNOFILE=65536
in: /etc/systemd/system/{NameofService}.service
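A minimal sketch of such a unit, with placeholder names and paths:
# /etc/systemd/system/mywebsocket.service  (service name and ExecStart path are placeholders)
[Unit]
Description=My WebSocket service
[Service]
ExecStart=/usr/local/bin/mywebsocket
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Then reload systemd and restart the service so the new limit applies:
sudo systemctl daemon-reload
sudo systemctl restart mywebsocket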
I think this has nothing to do with open files (it's just a misleading error message). A port that your application uses is already in use.
1. Find the process ID with the command:
ps aux
2. Kill the process (for example, 8572) with the command:
sudo kill -9 8572
3. Start your application again.
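If it is a port conflict, it can be quicker to look up the owner of the port directly instead of scanning ps aux (8080 below is just an example port):
sudo lsof -i :8080
# or
sudo ss -tlnp | grep :8080
Then kill that PID as above.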