I am having an issue with a JNI program randomly running out of memory.
This is a 32-bit Java program that reads a file and does some image processing, typically using 250MB to 1GB. All of those objects are then discarded, and the program then makes a series of calls to a JNI program that typically needs 100-250MB.
When run interactively, I have never seen a problem. However, when running a batch operation that does this on many files in succession, the JNI program will randomly run out of memory. It may hit a memory problem on one or two files, then run fine for the next 10 files, then glitch again.
I have dumped the amount of free memory right before the JNI calls and it is all over the map: sometimes 100MB, sometimes 800MB. My interpretation is that Java garbage collection sometimes runs immediately after the image processing and sometimes does not. When it does not, there may not be enough memory left for the JNI program.
I have read all the usual advice about GC being non-deterministic, that you shouldn't call it explicitly, that it won't make any difference, etc., but it sure seems like forcing a GC before starting the JNI calls would improve this situation.
But is there any way to really ensure that there is a certain amount of free memory before continuing?
To answer the questions about the JNI program: it is supplied by another company, and I have no real insight into how it allocates memory. All I know is that it is written in C++, which has no garbage collection, and I have been told that it needs 100-250MB of memory, which the numbers I have seen would confirm.
Maybe I should reword the question to be: if I am about to make a JNI call that I know will need 250MB of memory, how can I ensure that it will have that much memory available?
And it is certainly true that one possible solution would be to do a 64-bit build. However, this batch operation is part of QA on a 32-bit build, so I would like to be testing the real thing.
The following assumes you're using the HotSpot JVM.
32-bit processes are not just constrained by committed memory; far more importantly, they're constrained by virtual memory, i.e. reserved address space. A 32-bit process only has 4GB worth of addresses to work with, and in practice the usable user address space is more like 2-3GB, depending on the OS.
The JVM will reserve a fixed, possibly large amount of address space for the managed heap up front, then dynamically allocate some internal structures on top of that, and possibly even more for DirectByteBuffers or memory-mapped files. This can leave very little room for the native code to run in.
Use Native Memory Tracking to determine how much memory the various parts of the JVM are using (an example of turning it on is below), and
pmap <pid>
to check for memory-mapped files. Then try to limit that without hampering your application.
Alternatively, you could spawn a new process and do the image processing there.
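For example, NMT can be switched on at startup and then queried with jcmd; the summary level is usually enough (yourapp.jar here just stands in for however you launch the batch job):
java -XX:NativeMemoryTracking=summary -jar yourapp.jar
jcmd <pid> VM.native_memory summary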
My own approach to this problem is simply to call System.gc(), but from inside the native code, as sketched below. I hope this works for you, too.
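Something along these lines, assuming the JNIEnv* passed into the native method is available as env (the helper name is just illustrative):

#include <jni.h>

// Rough sketch: ask the JVM for a garbage collection from inside the native
// code, before the big native allocations happen. env is the JNIEnv* that the
// JVM passed to the native method.
static void requestJavaGc(JNIEnv* env)
{
    jclass systemClass = env->FindClass("java/lang/System");
    if (systemClass == nullptr) {
        return;  // lookup failed, a Java exception is now pending
    }
    jmethodID gcMethod = env->GetStaticMethodID(systemClass, "gc", "()V");
    if (gcMethod != nullptr) {
        env->CallStaticVoidMethod(systemClass, gcMethod);  // same as calling System.gc() in Java
    }
    env->DeleteLocalRef(systemClass);
}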
FWIW (and I realize this is kind of heresy), adding a call to System.gc() before the first JNI call for each file made a dramatic improvement to this situation. Instead of getting memory errors on 20% of the files, it is now less than 5%. Even better, the errors are no longer random but are repeatable from run to run, so presumably they can be tracked down.