We have an application with hundreds of possible user actions, and we are thinking about how to enhance memory leak testing.
Currently, here's how it happens: when manually testing the software, if the application appears to consume too much memory, we use a memory tool, find the cause, and fix it. It's a rather slow and inefficient process: problems are discovered late, and the whole thing relies on the goodwill of a single developer.
How can we improve that?
- Internally check that some actions (like "close file") actually recover some memory, and log the result? (See the sketch after this list.)
- Assert on memory state inside our unit tests (but this seems like a tedious task)?
- Manually check it at regular intervals?
- Include such a check each time a new user story is implemented?
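As a rough illustration of the first option, here is a minimal sketch; since the question doesn't name a language, C++ is an assumption, and it uses glibc's `mallinfo2` (Linux-only, glibc >= 2.33) to sample heap usage around an action. The vector stands in for a real user action:

```cpp
#include <malloc.h>   // mallinfo2: glibc >= 2.33, Linux-specific
#include <cstddef>
#include <cstdio>
#include <vector>

// Bytes currently allocated on the heap, per glibc's allocator statistics.
static std::size_t heapBytesInUse() {
    return mallinfo2().uordblks;
}

int main() {
    std::size_t before = heapBytesInUse();
    {
        // Stand-in for a real action pair such as "open file" / "close file".
        std::vector<char> buffer(10 * 1024 * 1024, 'x');
        std::printf("during action: %zu bytes in use\n", heapBytesInUse());
    }   // buffer destroyed here; its memory should be recovered

    std::ptrdiff_t delta =
        (std::ptrdiff_t)heapBytesInUse() - (std::ptrdiff_t)before;
    // A test harness would assert that delta stays under some small slack
    // (caches and allocator bookkeeping make an exact zero unrealistic).
    std::printf("delta after action: %td bytes\n", delta);
}
```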
In my company we have programmed an endless action path for our application. The Java garbage collector should clean up all unused maps, lists, and the like. So we start the application on the endless action path and watch whether memory usage keeps growing.
To check which fields are not freed, you can use JProfiler for Java.
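A minimal sketch of the same soak-test idea, written in C++ for consistency with the other examples in this thread (the answer above describes a Java setup): drive random actions forever and log heap usage periodically, so a steady upward trend stands out. `performRandomAction` is a hypothetical application hook, and `mallinfo2` is glibc-specific:

```cpp
#include <malloc.h>   // mallinfo2: glibc >= 2.33
#include <cstdio>

// Hypothetical hook that triggers one randomly chosen user action.
void performRandomAction() {
    // ... dispatch to one of the application's many actions ...
}

int main() {
    for (long i = 0; ; ++i) {
        performRandomAction();
        if (i % 1000 == 0) {
            // If this number climbs steadily for hours, something is leaking.
            std::printf("iteration %ld: %zu heap bytes in use\n",
                        i, mallinfo2().uordblks);
        }
    }
}
```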
Which language?
I'd use a tool such as Valgrind, try to fully exercise the program and see what it reports.
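For a C or C++ binary, the basic invocation looks like this (`./your_app` is a placeholder for your program):

```
valgrind --leak-check=full ./your_app
```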
First line of defense: if you work with an unmanaged language (like C/C++), you can efficiently discover most memory leaks by hijacking the memory management functions, for example by tracking all memory allocations and deallocations. Replace `new` and `delete` with your own custom versions and log every allocation and deallocation.
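A minimal sketch of that hijacking, assuming C++: replace the global `operator new`/`operator delete` and keep net counters. A production version would also replace the array and sized-delete variants; `report_live_allocations` is a hypothetical helper name:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

// Net allocation counters maintained by the hijacked operators.
static std::atomic<std::size_t> g_allocs{0};
static std::atomic<std::size_t> g_frees{0};

void* operator new(std::size_t size) {
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc{};
    g_allocs.fetch_add(1, std::memory_order_relaxed);
    return p;
}

void operator delete(void* p) noexcept {
    if (p) {
        g_frees.fetch_add(1, std::memory_order_relaxed);
        std::free(p);
    }
}

// Call at a quiescent point (e.g. right after "close file"): a steadily
// growing result points to a leak.
void report_live_allocations() {
    std::printf("live allocations: %zu\n", g_allocs.load() - g_frees.load());
}
```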
Second line of defense: speaking more generally (not about testing, but about fighting the issue at its origin), smart pointers help to avoid this problem. Fortunately, the C++11 standard provides convenient new smart pointer classes (`shared_ptr` and `unique_ptr`).
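A brief illustration of why this prevents leaks (the `Document` type is hypothetical): ownership is tied to scope, so memory is reclaimed without any explicit `delete`:

```cpp
#include <memory>
#include <string>

struct Document {
    std::string contents;
};

void useDocuments() {
    // Sole ownership: the Document is freed automatically when `doc`
    // goes out of scope, even if an exception is thrown first.
    std::unique_ptr<Document> doc(new Document);
    doc->contents = "hello";

    // Shared ownership: freed when the last shared_ptr is destroyed.
    std::shared_ptr<Document> shared = std::make_shared<Document>();
    std::shared_ptr<Document> alias = shared;
}   // no explicit delete anywhere, and no leak
```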
It seems to me that the core of the problem is not so much finding memory leaks as knowing when to test for them. You say you have lots of user actions, but you don't say which sequences of user actions are meaningful. If you can generate meaningful sequences at random, I'd argue hard for random testing. On random tests you would measure:

- Code coverage (with `gcov` or `valgrind`)
- Memory usage (with `valgrind`)
- Coverage of user actions

By "coverage of user actions" I mean statements like the following: for every pair of actions A and B where B can meaningfully follow A, we have tested at least one sequence in which A is immediately followed by B. If that's not true, then you can ask for what fraction of pairs A and B it is true.
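A sketch of how that pair coverage could be measured during random testing, under some assumptions: actions are simply numbered, `isMeaningful(a, b)` is a hypothetical predicate saying whether B may follow A, and the real `executeAction` hook is left as a comment:

```cpp
#include <cstdio>
#include <random>
#include <set>
#include <utility>

constexpr int kActions = 5;   // stand-in for "hundreds of user actions"

// Hypothetical predicate: may action b meaningfully follow action a?
bool isMeaningful(int a, int b) { return a != b; }

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, kActions - 1);
    std::set<std::pair<int, int>> tested;

    // Run random sequences, recording every adjacent (A, B) pair exercised.
    for (int run = 0; run < 1000; ++run) {
        int prev = pick(rng);
        for (int step = 0; step < 20; ++step) {
            int next = pick(rng);
            if (!isMeaningful(prev, next)) continue;  // skip nonsensical steps
            // executeAction(next);   // hypothetical application hook
            tested.insert({prev, next});
            prev = next;
        }
    }

    // Fraction of meaningful pairs actually exercised.
    int meaningful = 0;
    for (int a = 0; a < kActions; ++a)
        for (int b = 0; b < kActions; ++b)
            if (isMeaningful(a, b)) ++meaningful;

    std::printf("pair coverage: %zu of %d meaningful pairs\n",
                tested.size(), meaningful);
}
```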
If you have the CPU cycles to afford it, you would probably also benefit from running `valgrind` or another memory-checking tool either before every commit to your source-code repository or during a nightly build.

Automate!