I'm working on a Debian server with Tomcat 7 and Java 1.7. The application receives many TCP connections, and each TCP connection is an open file for the Java process.
Looking at /proc/<pid of the java process>/fd, I found that the number of open files sometimes exceeds 1024. When this happens, I find the stack trace _SocketException: Too many open files_ in the catalina.out log.
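For reference, this is roughly how I watch the descriptor count (the pgrep pattern is just an example and assumes a single Tomcat JVM started via the standard Bootstrap class; adjust it for your setup):

# count open file descriptors of the Tomcat JVM every 5 seconds
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
while true; do
  echo "$(date +%T): $(ls /proc/$PID/fd | wc -l) open fds"
  sleep 5
done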
Everything I can find about this error points to ulimit. I have already changed it, but the error keeps happening. Here is the config:
In /etc/security/limits.conf:
root soft nofile 8192
root hard nofile 8192
In /etc/sysctl.conf:
fs.file-max = 300000
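To make sure the sysctl value is actually active (and not only present in the file), I reload and check it:

sysctl -p                    # re-applies /etc/sysctl.conf
cat /proc/sys/fs/file-max    # should print 300000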
The ulimit -a command returns:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 16382
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 8192
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
But when I check the limits of the java process in /proc/<pid of the java process>/limits, the open files limit is only 1024:
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 32339 32339 processes
Max open files 1024 1024 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 32339 32339 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
How can I increase the Max open files limit for the java process?
The ulimit values are assigned at session start-up time, so changing /etc/security/limits.conf will not have any effect on processes that are already running. Non-login processes inherit the ulimit values from their parent, much like the inheritance of environment variables.
So after changing /etc/security/limits.conf, you'll need to log out and log back in (so that your session picks up the new limits), and then restart the application. Only then will your application be able to use the new limits.
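To confirm that the restarted process actually picked up the new limit, check its limits file directly (the pgrep pattern is only an example; substitute the real PID of your Tomcat JVM):

PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
grep 'Max open files' /proc/$PID/limits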
I just put the line
ulimit -n 8192
inside catalina.sh, so when I run catalina start, Java starts with the limit specified above.
Setting a higher ulimit may be completely unnecessary, depending on the workload/traffic that Tomcat/httpd handles. Linux creates a file descriptor per socket connection, so if Tomcat is configured to use the mod_jk/AJP protocol as a connector, you may want to check whether the maximum allowed connections is too high, or whether connectionTimeout or keepAliveTimeout is too high. These parameters play a huge role in the consumption of OS file descriptors. Sometimes it is also feasible to limit the number of Apache httpd/nginx connections if Tomcat is fronted by a reverse proxy; I once reduced the ServerLimit value in httpd to throttle incoming requests during a gate-rush scenario. All in all, raising the ulimit may not be a viable option, since your system may end up consuming however many descriptors you throw at it. You will have to come up with a holistic plan to solve this problem.
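For the connector side, those knobs live on the <Connector> element in server.xml. A sketch only, with illustrative values that you would need to tune for your own traffic:

<!-- server.xml: cap concurrent connections and drop idle ones sooner,
     so each connection ties up a file descriptor for less time -->
<Connector port="8009" protocol="AJP/1.3"
           maxConnections="2048"
           connectionTimeout="20000"
           keepAliveTimeout="5000" />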