While profiling a Java application I noticed an interesting fact. When the JVM is in a GC spiral of death, the thread dump looks like this:
"1304802943@qtp-393978767-9985" prio=10 tid=0x00007f3ed02dd000 nid=0x74e7 in Object.wait() [0x000000004febb000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
- locked <0x00000007aed40048> (a org.mortbay.thread.QueuedThreadPool$PoolThread)
"26774405@qtp-393978767-9984" prio=10 tid=0x00007f3ee4b37000 nid=0x74e6 in Object.wait() [0x0000000045d1a000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
- locked <0x00000007aed83aa0> (a org.mortbay.thread.QueuedThreadPool$PoolThread)
"764808089@qtp-393978767-9983" prio=10 tid=0x00007f3ee4c50000 nid=0x74e5 in Object.wait() [0x000000004ad6a000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
- locked <0x00000007aed5c448> (a org.mortbay.thread.QueuedThreadPool$PoolThread)
So there are a lot of threads in the TIMED_WAITING state. In theory this situation could easily occur in a normally functioning application (the application simply has no incoming requests at the moment), but I can't find even a single request-dispatching thread doing anything useful (the nominal hit rate is about 100 hps).
Does this behavior have something to do with GC, or is it just a coincidence?
Answering just the question's title:
What does a thread dump look like when the JVM is spending its time in GC?
The answer is: you have no means to obtain such a dump (in the usual way).
The JVM services a thread dump request only after reaching a safepoint, which simply can't happen while a GC is in progress.
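To make "the usual way" concrete, here is a minimal sketch of a programmatic dump through the standard ThreadMXBean API (the ThreadDumper class below is just my own illustration). Like jstack, this request is serviced at a safepoint, so during a stop-the-world GC pause it simply blocks until the pause ends; you never see "inside GC" stacks this way.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    public static void main(String[] args) {
        // Ask the VM for a snapshot of all thread stacks plus held monitors.
        // The snapshot is taken at a safepoint, so during a long GC pause
        // this call just waits until the collector has finished.
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            System.out.print(info);
        }
    }
}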
But there is a cheat: an active GC's thread dump can be obtained with the help of the undocumented JVMTI function AsyncGetCallTrace, which is mentioned in this post:
http://jeremymanson.blogspot.com/2010/07/why-many-profilers-have-serious.html
That post also hints that Oracle Solaris Studio can be used to take such mixed native/Java thread dumps.
Try jmap -histo:live over time; you can compare the output and see which object types are growing.
You need to have the JDK installed for jmap.
http://docs.oracle.com/javase/6/docs/technotes/tools/share/jmap.html
A warning: jmap is intensive. It will pause all threads while it's running, which should only be a few seconds, but processes can core dump because of it. Generally it's quick and safe, but I have seen it lock up or kill large applications with multi-gig heaps.
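With that caveat in mind, a capture-and-compare session might look roughly like this (the pid, file names and interval are made up for illustration):

jmap -histo:live 12345 > histo-before.txt
sleep 300
jmap -histo:live 12345 > histo-after.txt
diff histo-before.txt histo-after.txt

Object types whose instance counts keep climbing between snapshots are the first candidates for whatever is feeding the GC spiral.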
My guess is you have a thread pool which is waiting for something to do. If your process is efficient, then even at 100 requests per second you may have trouble catching even one thread doing something. I suggest you look at the CPU load of your process: if it's at 50%, you have a 50% chance of finding one thread (possibly not a request thread) doing something.
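If you want a number rather than a guess, one rough way to watch process CPU from inside the JVM is the com.sun.management extension of OperatingSystemMXBean (a sketch assuming a HotSpot/OpenJDK JVM on Java 7+; the class and the sampling loop are my own invention):

import java.lang.management.ManagementFactory;

public class CpuLoadProbe {
    public static void main(String[] args) throws InterruptedException {
        // On HotSpot the platform bean also implements the com.sun.management interface,
        // which exposes per-process CPU load (0.0 to 1.0, negative if unavailable).
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        for (int i = 0; i < 10; i++) {
            double load = os.getProcessCpuLoad();
            // ~0.5 here means roughly a coin-flip chance that a random thread dump
            // catches some thread actually on-CPU, as described above.
            System.out.printf("process CPU load: %.0f%%%n", load * 100);
            Thread.sleep(1000);
        }
    }
}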
If you want to see what your server spends its time doing, I would try a profiler like VisualVM, or a commercial profiler like YourKit.
Doing a Google search for your code, I found a different version, http://grepcode.com/file/repo1.maven.org/maven2/org.mortbay.jetty/jetty-util/7.0.0.pre5/org/mortbay/thread/QueuedThreadPool.java; however, I suspect your threads are TIMED_WAITING in this block in the run() method:
// We are idle
// wait for a dispatched job
synchronized (this)
{
    if (_job==null)
        this.wait(getMaxIdleTimeMs());
    job=_job;
    _job=null;
}
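For what it's worth, that idle-wait pattern is enough to reproduce the exact dump signature on its own; here is a minimal, self-contained illustration (the class name, field and timeout are invented, not Jetty's code):

public class IdleWaitDemo {
    private Object _job; // stands in for QueuedThreadPool's dispatched job

    public static void main(String[] args) throws InterruptedException {
        final IdleWaitDemo pool = new IdleWaitDemo();
        Thread poolThread = new Thread(new Runnable() {
            public void run() {
                pool.idle();
            }
        }, "fake-pool-thread");
        poolThread.setDaemon(true); // let the demo exit once main returns
        poolThread.start();
        Thread.sleep(500); // give the thread time to park in wait()
        // Prints TIMED_WAITING - the same state shown for the qtp threads above
        System.out.println(poolThread.getName() + ": " + poolThread.getState());
    }

    private synchronized void idle() {
        try {
            while (_job == null) {
                // Timed wait on the object's monitor: in a thread dump this shows as
                // "in Object.wait()", TIMED_WAITING (on object monitor), plus a
                // "- locked <...>" line for the monitor being waited on.
                wait(60000);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

Running it should print something like "fake-pool-thread: TIMED_WAITING".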