I am using Tomcat 6.0.20. I have a number of web apps running on the server, and after approximately 3 days the server needs restarting, otherwise it crashes and becomes unresponsive.
I have the following settings for the JVM:
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\logs
This provides me with an .hprof file, which I have loaded into Java VisualVM. It identifies the following:
byte[]   37,206 instances  |  86,508,978 bytes
int[]    540,909 instances |  55,130,332 bytes
char[]   357,847 instances |  41,690,928 bytes
The list goes on, but how do I determine what is causing these issues?
I am using New Relic to monitor the JVM, and only one error seems to appear, but it is a recurring one: org.apache.catalina.connector.ClientAbortException. Is it possible that when a user session is aborted, any database connections or variables created are not being closed and are therefore left orphaned?
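To illustrate the kind of pattern I mean (a made-up sketch rather than our actual code; the class name, query and column are invented), this is the sort of try/finally cleanup I would expect to be needed around every connection:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CustomerDao // hypothetical example class
{
    private final DataSource dataSource;

    public CustomerDao(DataSource dataSource)
    {
        this.dataSource = dataSource;
    }

    public String findName(int id) throws SQLException
    {
        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs = null;
        try
        {
            con = dataSource.getConnection();
            ps = con.prepareStatement("SELECT name FROM customer WHERE id = ?");
            ps.setInt(1, id);
            rs = ps.executeQuery();
            return rs.next() ? rs.getString(1) : null;
        }
        finally
        {
            // Close in reverse order; each close is attempted even if the request
            // was aborted part-way through, so the connection always goes back to the pool.
            if (rs != null) try { rs.close(); } catch (SQLException ignored) {}
            if (ps != null) try { ps.close(); } catch (SQLException ignored) {}
            if (con != null) try { con.close(); } catch (SQLException ignored) {}
        }
    }
}

My question is whether a ClientAbortException thrown while writing the response could somehow leave cleanup like this undone.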
There is a function which is used quite heavily throughout each web app; I'm not sure if it has any bearing on the leak:
public static String replaceCharacters(String s)
{
    s = s.replaceAll(" ", " ");
    s = s.replaceAll(" ", "_");
    s = s.replaceAll("\351", "e");   // é
    s = s.replaceAll("/", "");
    s = s.replaceAll("--", "-");
    s = s.replaceAll("&", "and");
    s = s.replaceAll("&", "and");
    s = s.replaceAll("__", "_");
    s = s.replaceAll("\\(", "");
    s = s.replaceAll("\\)", "");
    s = s.replaceAll(",", "");
    s = s.replaceAll(":", "");
    s = s.replaceAll("\374", "u");   // ü
    s = s.replaceAll("-", "_");
    s = s.replaceAll("\\+", "and");
    s = s.replaceAll("\"", "");
    s = s.replaceAll("\\[", "");
    s = s.replaceAll("\\]", "");
    s = s.replaceAll("\\*", "");
    return s;
}
Is it possible that when a user connection is aborted, for example the browser is closed or the user has left the site, all variables, connections, etc. are purged/released? Isn't GC supposed to handle that?
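For illustration, my understanding is that anything GC cannot reclaim on its own (open connections, file handles, etc.) has to be released explicitly, for example from a session listener along these lines (an invented sketch with a made-up attribute name, not code we actually have):

import java.sql.Connection;
import java.sql.SQLException;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

public class ConnectionCleanupListener implements HttpSessionListener // hypothetical
{
    public void sessionCreated(HttpSessionEvent se)
    {
        // nothing to set up here
    }

    public void sessionDestroyed(HttpSessionEvent se)
    {
        // Runs when the session expires or is invalidated, even if the user
        // simply closed the browser; release anything GC cannot reclaim itself.
        Connection con = (Connection) se.getSession().getAttribute("db.connection"); // made-up attribute
        if (con != null)
        {
            try
            {
                con.close();
            }
            catch (SQLException ignored)
            {
            }
        }
    }
}

It would be registered with a <listener> element in web.xml. Is that the sort of cleanup I should be relying on?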
Below are my JVM settings:
-Dcatalina.base=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20
-Dcatalina.home=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20
-Djava.endorsed.dirs=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\endorsed
-Djava.io.tmpdir=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\temp
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.util.logging.config.file=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\conf\logging.properties
-Dfile.encoding=UTF-8
-Dsun.jnu.encoding=UTF-8
-javaagent:c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\newrelic\newrelic.jar
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=c:\tomcat\Websites\private\mydomain\apache-tomcat-6.0.20\logs
-Dcom.sun.management.jmxremote.port=8086
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
vfprintf
-Xms1024m
-Xmx1536m
Am I missing anything? The server has 3GB RAM.
Any help would be much appreciated :-)
Start the server in a local dev environment, attach a profiler (YourKit preferably) and take heap dumps periodically. You will see the growth in byte[] objects, and the tool lets you connect those byte[] instances to the application class that is leaking them, which will help you identify the defect in your code.

I migrated all projects to Tomcat 7.0.42 and my errors have disappeared; our websites are far more stable and slightly faster, we are using less memory, and CPU usage is far better.
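To follow up on the periodic heap dump suggestion: if attaching a profiler to the production box is not practical, a dump can also be triggered from inside a Sun/Oracle HotSpot JVM, roughly like this (a sketch; the helper class name and output path are only examples):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper // hypothetical helper
{
    public static void dump() throws java.io.IOException
    {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
        // true = dump only live (reachable) objects, which is what matters for a leak
        diagnostic.dumpHeap("c:\\dumps\\heap-" + System.currentTimeMillis() + ".hprof", true);
    }
}

Calling this every few hours and comparing the dumps in VisualVM or Eclipse MAT shows which object counts keep growing.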
You need to use a dump analyser that allows you to see what is making these objects reachable. Pick an object and see what other object or objects refer to it ... and work backwards through the chains until you find either a "GC root" or some application-specific class that you recognise.
Once you have identified that, you've gone most of the way to identifying the source of your storage leak.
That function has no direct bearing on the leak. It certainly won't cause it. (It could generate a lot of garbage String objects ... but that's a different issue.)
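If you did want to cut that churn anyway, precompiling the patterns and reusing them is the usual approach; something along these lines (only a few of the replacements shown, the rest follow the same shape; the holder class name is invented):

import java.util.regex.Pattern;

public final class ReplaceCharactersUtil // hypothetical holder class
{
    // Compiled once and reused, instead of recompiling inside every replaceAll() call.
    private static final Pattern SPACES = Pattern.compile(" ");
    private static final Pattern AMPERSAND = Pattern.compile("&");
    private static final Pattern E_ACUTE = Pattern.compile("\351");
    private static final Pattern STRIPPED = Pattern.compile("[/,:\"\\[\\]*()]"); // characters removed outright

    public static String replaceCharacters(String s)
    {
        s = E_ACUTE.matcher(s).replaceAll("e");
        s = AMPERSAND.matcher(s).replaceAll("and");
        s = STRIPPED.matcher(s).replaceAll("");
        s = SPACES.matcher(s).replaceAll("_");
        // ... remaining replacements follow the same pattern
        return s;
    }

    private ReplaceCharactersUtil()
    {
    }
}

Either way, those intermediate Strings are short-lived and are collected normally; they won't account for a heap that keeps growing over three days.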