I am writing some code that uses MPI, and I kept noticing memory leaks when running it under valgrind. While trying to pin down where the problem was, I ended up with this simple (and totally useless) main:
#include "/usr/include/mpi/mpi.h"

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    MPI_Finalize();
    return 0;
}
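I compile and run it roughly like this (the file name and valgrind flags are just the ones I happen to use, nothing special):

mpicc leak_test.c -o leak_test
valgrind --leak-check=full ./leak_test               # serial run
mpirun -np 2 valgrind --leak-check=full ./leak_test  # parallel run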
As you can see, this code doesn't do anything and shouldn't create any problems. However, when I run it with valgrind (both in the serial and parallel case), I get the following summary:
==28271== HEAP SUMMARY:
==28271== in use at exit: 190,826 bytes in 2,745 blocks
==28271== total heap usage: 11,214 allocs, 8,469 frees, 16,487,977 bytes allocated
==28271==
==28271== LEAK SUMMARY:
==28271== definitely lost: 5,950 bytes in 55 blocks
==28271== indirectly lost: 3,562 bytes in 32 blocks
==28271== possibly lost: 0 bytes in 0 blocks
==28271== still reachable: 181,314 bytes in 2,658 blocks
==28271== suppressed: 0 bytes in 0 blocks
I don't understand why there are these leaks. Maybe it's just that I can't read the valgrind output correctly, or that I'm not using MPI initialization/finalization properly...
I am using OpenMPI 1.4.1-3 under Ubuntu on a 64-bit architecture, if that helps.
Thanks a lot for your time!
The OpenMPI FAQ addresses running under valgrind. It covers initialization issues and memory leaks during finalization, which should have no practical negative impact.
You're not doing anything wrong. Memcheck false positives are common with valgrind; the best you can do is suppress them.
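For example, a suppression entry for one of these leaks looks roughly like this (the frames are illustrative; valgrind can print ready-made entries for you if you run with --gen-suppressions=all, and OpenMPI also ships its own suppression file, typically something like share/openmpi/openmpi-valgrind.supp under its install prefix):

{
   openmpi_init_leak
   Memcheck:Leak
   fun:malloc
   ...
   fun:ompi_mpi_init
}

Save such entries into a file, say ompi.supp, and pass it on later runs:

mpirun -np 2 valgrind --suppressions=ompi.supp ./your_program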
This page of the manual says more about these false positives. A quote from near the end: