I have a project I'm writing (in Java) for a class where the prof says we're not allowed to use more than 200m of memory.
I limit the heap to 50m (just to be absolutely sure) with -Xmx50m, but according to top, it's still using 300m.
I tried running Eclipse Memory Analyzer and it reports only 26m
Could this all be memory on the stack? I'm pretty sure I never go deeper than about 300 method calls (yes, it is a recursive DFS search), so that would have to mean every stack frame is using up almost a megabyte, which seems hard to believe.
The program is single-threaded. Does anyone know any other places in which I might reduce memory usage? Also, how can I check/limit how much memory the stack is using?
UPDATE: I'm using the following JVM options now with no effect (still about 300m according to top): -Xss104k -Xms40m -Xmx40m -XX:MaxPermSize=1k
Another UPDATE: Actually, if I let it run a little longer (with all these options), about half the time it suddenly drops to 150m after 4 or 5 seconds (the other half of the time it doesn't drop). What makes this really strange is that there is nothing stochastic in my program (and, as I said, it's single-threaded), so there's no reason it should behave differently on different runs.
Could it have something to do with the JVM I'm using?
java version "1.6.0_27"
OpenJDK Runtime Environment (IcedTea6 1.12.3) (6b27-1.12.3-0ubuntu1~10.04)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
According to java -h, the default JVM is -server. I tried adding -cacao and now (with all the other options) it's only 59m. So I suppose this solves my problem. Can anyone explain why this was necessary? Also, are there any drawbacks I should know about?
One more update: cacao is really, really slow compared to server. It's an awful option.
The top command reflects the total amount of memory used by the Java process. This includes, among other things:
- A basic memory overhead of the JVM itself
- The heap space (bounded by -Xmx)
- The permanent generation space (-XX:MaxPermSize - not standard in all JVMs)
- Thread stack space (-Xss per stack), which may grow significantly depending on the number of threads
- Space used by native allocations (e.g. via the ByteBuffer class, or JNI)
Max memory = [-Xmx] + [-XX:MaxPermSize] + number_of_threads * [-Xss]
Here, -Xmx is the maximum heap size, -Xms the initial (minimum) heap size, -Xss the per-thread stack size, and -XX:MaxPermSize the maximum permanent generation size.
The following example illustrates this situation. I have launched my Tomcat with the following startup parameters:
-Xmx168m -Xms168m -XX:PermSize=32m -XX:MaxPermSize=32m -Xss1m
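Plugging those into the formula above: with, say, 100 worker threads (the exact number depends on the Tomcat configuration), you would expect roughly 168m (heap) + 32m (PermGen) + 100 * 1m (stacks) = 300m, plus the JVM's own overhead and any native allocations on top of that.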
With -Xmx you are configuring the heap size. To configure the stack size, use the -Xss parameter (note that -Xss applies per thread). The sum of those two parameters should be approximately what you want:
-Xmx150m -Xss50m
for example.
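In your case, with a single application thread and a recursion depth of only a few hundred frames, something along these lines should fit comfortably in the 200m budget (YourMainClass is just a placeholder):

    java -Xmx100m -Xss512k -XX:MaxPermSize=32m YourMainClass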
Additionally there is also the -XX:MaxPermSize parameter, which controls the size of the permanent generation. Its default value is 32mb for -client and 64mb for -server. Factor it into your calculation as well. PermGen space is:
The permanent generation is used to hold reflective data of the VM itself, such as class objects and method objects.
So basically it stores the JVM's internal data, like class definitions and interned strings.
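As a toy illustration (this assumes a Java 6-era HotSpot/OpenJDK VM, where interned strings still live in PermGen), the following will eventually die with OutOfMemoryError: PermGen space if you run it with a small -XX:MaxPermSize:

    // PermGenFiller.java - interns an endless stream of distinct strings.
    // Run with e.g.: java -XX:MaxPermSize=8m PermGenFiller
    import java.util.ArrayList;
    import java.util.List;

    public class PermGenFiller {
        public static void main(String[] args) {
            List<String> keep = new ArrayList<String>(); // hold references so nothing can be collected
            for (long i = 0; ; i++) {
                keep.add(("some-unique-string-" + i).intern());
            }
        }
    }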
At the end I must say that there is one part which you can't control: the memory used by the native Java process itself. Java is a program just like any other, so it also uses memory. If you are watching memory usage in Task Manager (or top) you will see this memory as well, together with your program's memory consumption.
It's important to note that "total memory used" (RSS in Linux land) includes the JVM heap (plus other JVM areas) as well as any "native memory" allocated.
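As a quick way to see that kind of native allocation in action: direct buffers are allocated outside the Java heap, so top's RSS grows while the heap numbers barely move. A minimal sketch (the sizes are arbitrary):

    // DirectBufferDemo.java - allocates off-heap memory that -Xmx does not bound.
    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class DirectBufferDemo {
        public static void main(String[] args) {
            List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
            for (int i = 0; i < 4; i++) {
                // Each call reserves 8 MB of native memory; RSS in top grows,
                // but Runtime.getRuntime().totalMemory() stays roughly flat.
                buffers.add(ByteBuffer.allocateDirect(8 * 1024 * 1024));
            }
            System.out.println("Allocated " + buffers.size() * 8 + " MB of direct buffers");
        }
    }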
For instance, these people found that allocating too many JAXBContexts (which have associated native memory) between GCs could cause it to use a lot of extra RAM. Another common one is apparently java.util.zip.Inflater if you don't call end() on it (or GZIPInputStream/GZIPOutputStream if you don't close them, etc.):
http://sleeplessinslc.blogspot.com/2014/08/jvm-native-memory-leak.html
His final workaround/fix was either to GC "more often" (by using the G1 garbage collector, or by specifying a smaller [ironically] -Xmx setting) or to cache the JAXBContext objects (since they have no close method, you can't release them explicitly).
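Caching is straightforward since JAXBContext is documented to be thread-safe; a rough sketch (SomeDto is just a stand-in for whatever classes you actually bind):

    // JaxbHolder.java - create the JAXBContext once and reuse it.
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.JAXBException;
    import javax.xml.bind.annotation.XmlRootElement;

    public class JaxbHolder {
        @XmlRootElement
        public static class SomeDto {       // placeholder bound type
            public String value;
        }

        // One shared, reusable context; recreating it per request is what leaked native memory.
        private static final JAXBContext CONTEXT;
        static {
            try {
                CONTEXT = JAXBContext.newInstance(SomeDto.class);
            } catch (JAXBException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        public static JAXBContext get() {
            return CONTEXT;
        }
    }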
Also note that sometimes you can find memory culprits just by examining jstack output: http://javaeesupportpatterns.blogspot.com/2011/09/jaxbcontext-performance-problem-case.html
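(jstack just takes a process id, which you can get from jps or top; 12345 below is only an example pid:)

    jps            # list running Java processes and their pids
    jstack 12345   # dump all thread stacks for that pid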
It's also sometimes possible to "miss" closing, for instance, GZIP streams accidentally: http://kohsuke.org/2011/11/03/quiz-time-memory-leak-in-java
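In pre-7 Java that means the old try/finally pattern; a minimal sketch:

    // GzipWriteExample.java - close the stream in finally so the underlying
    // Deflater's native zlib memory is released promptly.
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    public class GzipWriteExample {
        public static void main(String[] args) throws IOException {
            GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream("data.gz"));
            try {
                out.write("hello".getBytes("UTF-8"));
            } finally {
                out.close();   // also releases the native compression buffers
            }
        }
    }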
Have you tried using JVisualVM?
http://docs.oracle.com/javase/6/docs/technotes/tools/share/jvisualvm.html
I've often found it helps me track this stuff down. It will show you how much of each kind of memory is being used, and even lets you drill in and find out what is using it.