I have a handful of heap dumps that I am analyzing after the JVM threw OutOfMemoryError. I'm using HotSpot JDK 1.7 (64-bit) on Windows Server 2008 R2. The application server is JBoss 4.2.1.GA, launched via the Tanuki Java Service Wrapper.
It is launched with the following arguments:
wrapper.java.additional.2=-XX:MaxPermSize=256m
wrapper.java.initmemory=1498
wrapper.java.maxmemory=3000
wrapper.java.additional.19=-XX:+HeapDumpOnOutOfMemoryError
which translate to:
-Xms1498m -Xmx3000m -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError
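Since the wrapper translates its own properties into JVM flags, it may be worth confirming what the running JVM actually received. A sketch, assuming a live process (substitute its PID for <PID>; jcmd ships with JDK 7 and later):

```shell
# Print the flags the running JVM actually has in effect (JDK 7+).
jcmd <PID> VM.flags

# Alternative: jinfo also reports -XX flag values for a running process.
jinfo -flags <PID>
```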
There are some other GC & JMX configuration parameters as well.
My issue is that when I analyze a heap dump created due to an OutOfMemoryError using the Eclipse Memory Analyzer (MAT), MAT invariably shows a heap size of 2.3 GB or 2.4 GB. I have already enabled the "Keep unreachable objects" option in MAT, so I don't believe MAT is trimming the heap. The errors that trigger the dumps are either:
java.lang.RuntimeException: java.lang.OutOfMemoryError: GC overhead limit exceeded
or
java.lang.OutOfMemoryError: Java heap space
Summary in MAT:
Size: 2.3 GB Classes: 21.7k Objects: 47.6m Class Loader: 5.2k
My actual heap dump files are roughly 3300 MB on disk, so they are in line with my 3000m max heap setting.
So where is the missing 500-600M of memory in MAT? Why does MAT only show my heap size as 2.4G?
Other posts on SO tend to indicate that it is the JVM doing some GC prior to dumping the heap, but if the missing 500M is due to a GC, why is it even throwing the OOM in the first place? If a GC could actually clear up 500M (or nearly 25% of my heap), is the JVM really out of memory?
Are there ways to tune the heap dumps so I can get a full/complete picture of the heap (including the missing 500M)?
If not, I'm really struggling to work out how/why I'm encountering these OOMs in the first place.
As requested in the comments, I am attaching the output of jstat -gc <PID> 1000 from a live node: http://pastebin.com/07KMG1tr.
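For anyone reading the pastebin: the used heap at a sample point can be computed from the jstat -gc columns as S0U + S1U + EU + OU (survivor, eden, and old-gen utilization, all in KB; columns 3, 4, 6, and 8 on JDK 7). A sketch with a made-up sample line (the values below are illustrative, not taken from my pastebin):

```shell
# Hypothetical jstat -gc sample (KB); JDK 7 columns:
# S0C S1C S0U S1U EC EU OC OU PC PU YGC YGCT FGC FGCT GCT
line="1024.0 1024.0 0.0 512.0 262144.0 131072.0 2796544.0 2097152.0 262144.0 131072.0 120 3.5 40 20.1 23.6"

# Used heap = S0U + S1U + EU + OU (fields 3, 4, 6, 8).
echo "$line" | awk '{ printf "used KB = %.1f\n", $3 + $4 + $6 + $8 }'
# Same figure in MB for comparison against -Xmx.
echo "$line" | awk '{ printf "used MB = %.1f\n", ($3 + $4 + $6 + $8) / 1024 }'
```

Comparing that figure against -Xmx just before an OOM shows how close the live data actually gets to the configured ceiling.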