I'm trying to find the cause of memory leaks in my Java application. I need to get a heap dump for a process that is stuck in a long GC cycle. jmap isn't working in this case, both because the app is hung and because the heap is very large.
Unfortunately, jmap throws UnknownOopException on the core dump I took. I suppose it isn't correct to take a core dump during GC. Is there any way to suspend the Java process at a point where taking a core dump would be correct?
Or am I totally wrong, and the core dump is broken because of some other problem?
What you need to do is take the heap dump before the heap is so close to full that the GC locks up the application.
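One way to do this without babysitting the process (a sketch, assuming a HotSpot JVM; myapp.jar and the paths are placeholders) is to let the JVM write the dump for you:

    # Write a heap dump automatically when an OutOfMemoryError is thrown
    java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/myapp.hprof -jar myapp.jar

    # Or dump before each full GC, if long full GCs are the symptom
    java -XX:+HeapDumpBeforeFullGC -XX:HeapDumpPath=/tmp/dumps -jar myapp.jar

Either way the JVM writes the dump itself at a safepoint, so you don't have to attach to a hung process or work from a core file.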
You cannot take a heap dump while a GC is being performed. You need to take a heap dump before or after the GC. If you want to know why the GC is taking so long, it is useful to determine which phase is taking so long. To see this, add
-verbosegc
This will indicate whether it is taking a long time to reach a safepoint, copy objects, scan the tenured space, check references, or something else. It could be taking a long time simply because you have lots of objects to clean up. As a guesstimate, the worst case is roughly 1 second per 2 GB of heap objects.
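As a hedged example (these flag spellings assume a pre-Java 9 HotSpot JVM; Java 9+ replaced them with unified -Xlog:gc* logging), a more detailed GC log than plain -verbosegc can be requested like this, with myapp.jar and the log path as placeholders:

    # Pre-Java 9: GC phases, pause times and timestamps written to a file
    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log -jar myapp.jar

    # Java 9 and later: unified logging equivalent
    java -Xlog:gc*:file=/tmp/gc.log -jar myapp.jar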
In my experience, an OutOfMemoryError or long GC cycles do not indicate a memory leak for certain.
In order to search for a memory leak, take two separate heap dumps some time apart (I've used jvisualvm; nowadays a version is bundled with the JDK) and analyze them. Hint: inspecting the retained size of objects helps.
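If a GUI is not available (for example on a headless server), roughly the same two snapshots can be taken from the command line; the PID and file names below are placeholders:

    # First snapshot (jcmd ships with the JDK; jmap -dump works similarly)
    jcmd <pid> GC.heap_dump /tmp/dump1.hprof

    # ... let the application run while the suspected leak grows ...

    # Second snapshot, some time later
    jcmd <pid> GC.heap_dump /tmp/dump2.hprof

Both files can then be opened and compared in jvisualvm or Eclipse MAT.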
If no apparent memory leak turns up, then depending on what your application does, tweaking the JVM's GC options is your best bet. Look at the generation size ratios, the number of collections an object survives before it is tenured, etc.; a few of the relevant flags are sketched below.
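For reference, these are the kind of HotSpot flags involved (a sketch only; the values are placeholders and the right ones depend entirely on your workload):

    # Ratio of old generation size to young generation size
    -XX:NewRatio=2

    # Ratio of eden size to each survivor space within the young generation
    -XX:SurvivorRatio=8

    # Number of young collections an object survives before being tenured
    -XX:MaxTenuringThreshold=15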
Hope this helps a bit.