I am having an issue with a JNI program intermittently running out of memory.
This is a 32-bit Java program that reads a file and does some image processing, typically using 250MB, and sometimes up to 1GB, of heap. All of those objects are then discarded, and the program makes a series of calls to a JNI library that typically needs 100-250MB.
When run interactively, I have never seen a problem. However, when running a batch operation that does this on many files in succession, the JNI code will randomly run out of memory. It may hit a memory problem on one or two files, then run fine for the next 10 files, then glitch again.
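For context, the batch driver is roughly this shape (processImage(), saveResult(), and NativeLib.convert() are simplified stand-ins for the real code and the vendor's JNI entry point):

```java
for (File file : filesToProcess) {
    ImageResult result = processImage(file);  // heap usage peaks at 250MB-1GB here
    saveResult(result);
    result = null;  // all of the large objects are unreachable from this point on

    NativeLib.convert(file.getPath());        // vendor C++ via JNI, needs 100-250MB
}
```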
I have dumped the amount of free memory right before the JNI calls, and it is all over the map: sometimes 100MB, sometimes 800MB. My interpretation is that Java garbage collection sometimes runs immediately after the image processing and sometimes does not; when it does not, there may not be enough memory left for the JNI code.
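The dump itself is nothing fancy; it is essentially this, placed right before the native call:

```java
// Snapshot of the Java heap just before handing off to the native library.
Runtime rt = Runtime.getRuntime();
long free  = rt.freeMemory();   // unused space inside the committed heap
long total = rt.totalMemory();  // heap currently committed by the JVM
long max   = rt.maxMemory();    // the -Xmx ceiling
System.out.printf("before JNI: free=%dMB committed=%dMB max=%dMB%n",
        free >> 20, total >> 20, max >> 20);
```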
I have read all the standard advice about GC being non-deterministic, that you shouldn't call it explicitly, that it won't make any difference, and so on, but it certainly seems like forcing a collection before starting the JNI calls would improve this situation.
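Concretely, I am tempted to do something like the following before the first JNI call. System.gc() is only a hint, but a cleared WeakReference at least proves that some collection cycle actually ran:

```java
import java.lang.ref.WeakReference;

// Sketch: drop a canary object, keep only a weak reference to it, and
// loop until the collector clears it.
static void requestGcAndWait() throws InterruptedException {
    Object canary = new Object();
    WeakReference<Object> ref = new WeakReference<>(canary);
    canary = null;                  // the canary is now only weakly reachable
    while (ref.get() != null) {
        System.gc();                // a hint; the JVM is free to ignore it
        Thread.sleep(10);
    }
}
```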
But is there any way to really ensure that there is a certain amount of free memory before continuing?
To answer the questions about the JNI library: it is supplied by another company, and I have no real insight into how it allocates memory. All I know is that it is written in C++, which has no garbage collection, and I have been told that it needs 100-250MB of memory; the numbers I have seen confirm that.
Maybe I should reword the question: if I am about to make a JNI call that I know will need 250MB of memory, how can I ensure that much memory will be available?
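In other words, I am looking for something with the semantics of this hypothetical guard, if such a thing can be written reliably at all:

```java
// Hypothetical guard: try to get at least `needed` bytes of headroom
// before the native call, giving up after a few attempts. Note that this
// only measures the Java heap; the C++ library allocates from the native
// heap, so I am not even sure this is the right quantity to measure.
static boolean ensureFreeMemory(long needed, int attempts) {
    Runtime rt = Runtime.getRuntime();
    for (int i = 0; i < attempts; i++) {
        // headroom = unused committed heap + uncommitted space up to -Xmx
        long headroom = rt.freeMemory() + (rt.maxMemory() - rt.totalMemory());
        if (headroom >= needed) {
            return true;
        }
        System.gc();  // again, only a hint
    }
    return false;
}
```

The batch loop would then call something like ensureFreeMemory(250L << 20, 5) before each JNI call and log or skip the file if it fails. But is there any approach along these lines that actually guarantees the memory will be there?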
And it is certainly true that one possible solution would be a 64-bit build. However, this batch operation is part of QA on a 32-bit build, so I would like to be testing the real thing.