We recently observed on our production system that the resident memory usage of our Java container keeps growing. To understand why the Java process consumes much more memory than Heap + Thread Stacks + Shared Objects + Code Cache + etc., we investigated with native tools such as pmap. As a result, we found a number of 64M memory blocks (in pairs) allocated by the native process (probably with malloc/mmap):
0000000000400000 4K r-x-- /usr/java/jdk1.7.0_17/bin/java
0000000000600000 4K rw--- /usr/java/jdk1.7.0_17/bin/java
0000000001d39000 4108K rw--- [ anon ]
0000000710000000 96000K rw--- [ anon ]
0000000715dc0000 39104K ----- [ anon ]
00000007183f0000 127040K rw--- [ anon ]
0000000720000000 3670016K rw--- [ anon ]
00007fe930000000 62876K rw--- [ anon ]
00007fe933d67000 2660K ----- [ anon ]
00007fe934000000 20232K rw--- [ anon ]
00007fe9353c2000 45304K ----- [ anon ]
00007fe938000000 65512K rw--- [ anon ]
00007fe93bffa000 24K ----- [ anon ]
00007fe940000000 65504K rw--- [ anon ]
00007fe943ff8000 32K ----- [ anon ]
00007fe948000000 61852K rw--- [ anon ]
00007fe94bc67000 3684K ----- [ anon ]
00007fe950000000 64428K rw--- [ anon ]
00007fe953eeb000 1108K ----- [ anon ]
00007fe958000000 42748K rw--- [ anon ]
00007fe95a9bf000 22788K ----- [ anon ]
00007fe960000000 8080K rw--- [ anon ]
00007fe9607e4000 57456K ----- [ anon ]
00007fe968000000 65536K rw--- [ anon ]
00007fe970000000 22388K rw--- [ anon ]
00007fe9715dd000 43148K ----- [ anon ]
00007fe978000000 60972K rw--- [ anon ]
00007fe97bb8b000 4564K ----- [ anon ]
00007fe980000000 65528K rw--- [ anon ]
00007fe983ffe000 8K ----- [ anon ]
00007fe988000000 14080K rw--- [ anon ]
00007fe988dc0000 51456K ----- [ anon ]
00007fe98c000000 12076K rw--- [ anon ]
00007fe98cbcb000 53460K ----- [ anon ]
I interpret the line 0000000720000000 3670016K as the heap space, whose size we define with the JVM parameter "-Xmx". Right after that the pairs begin, each of which sums to exactly 64M (for example 62876K + 2660K = 65536K). We are running CentOS release 5.10 (Final), 64-bit, with JDK 1.7.0_17.
The question is: what are those blocks? Which subsystem allocates them?
Update: We do not use JIT and/or JNI native code invocations.
I ran into the same problem. This is a known problem with glibc >= 2.10: the newer glibc creates per-thread malloc arenas, and on 64-bit each arena can grow to 64M, which matches the anonymous blocks you are seeing.
The cure is to set this environment variable:
export MALLOC_ARENA_MAX=4
IBM article about setting MALLOC_ARENA_MAX https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en
Google for MALLOC_ARENA_MAX or search for it on SO to find a lot of references.
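To double-check that the variable actually reaches the JVM process, here is a small sketch of my own (not from the references above) that prints it from inside the application:

public class MallocArenaCheck {
    public static void main(String[] args) {
        // Prints the value of MALLOC_ARENA_MAX as seen by this JVM,
        // or null if it was not set in the environment that launched the process.
        System.out.println("MALLOC_ARENA_MAX=" + System.getenv("MALLOC_ARENA_MAX"));
    }
}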
You might also want to tune other malloc options to optimize for low fragmentation of allocated memory; the MALLOC_MMAP_THRESHOLD_, MALLOC_TRIM_THRESHOLD_, MALLOC_TOP_PAD_ and MALLOC_MMAP_MAX_ environment variables are described in man mallopt.
It's also possible that there is a native memory leak. A common problem is a native memory leak caused by not closing a ZipInputStream/GZIPInputStream.

A typical way that a ZipInputStream is opened is by a call to Class.getResource/ClassLoader.getResource and calling openConnection().getInputStream() on the java.net.URL instance, or by calling Class.getResourceAsStream/ClassLoader.getResourceAsStream. One must ensure that these streams always get closed.
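For illustration, here is a minimal sketch (my own example, not from the original answer; the class and method names are placeholders) of reading a classpath resource with try-with-resources so the underlying stream is always closed, even when reading fails:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ResourceReader {

    // Reads a classpath resource into a byte array. The try-with-resources
    // block guarantees that the stream (and any native zip handle behind it)
    // is closed, even if an exception is thrown while reading.
    public static byte[] readResource(String name) throws IOException {
        try (InputStream in = ResourceReader.class.getResourceAsStream(name)) {
            if (in == null) {
                throw new IOException("Resource not found: " + name);
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            return out.toByteArray();
        }
    }
}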
You can use jemalloc to debug native memory leaks by enabling malloc sampling profiling; the settings are specified in the MALLOC_CONF environment variable. Detailed instructions are available in this blog post: http://www.evanjones.ca/java-native-leak-bug.html , which also shows how to use jemalloc to debug a native memory leak in Java applications. The same blog also contains information about another native memory leak related to ByteBuffers.