Whenever I try to finalize my MPI program, I get errors similar to the following.
[mpiexec] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:70): assert (!(pollfds[i].revents & ~POLLIN & ~POLLOUT & ~POLLHUP)) failed
[mpiexec] main (./pm/pmiserv/pmip.c:221): demux engine error waiting for event
[mpiexec] HYDT_bscu_wait_for_completion (./tools/bootstrap/utils/bscu_wait.c:99): one of the processes terminated badly; aborting
[mpiexec] HYDT_bsci_wait_for_completion (./tools/bootstrap/src/bsci_wait.c:18): bootstrap device returned error waiting for completion
[mpiexec] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:352): bootstrap server returned error waiting for completion
[mpiexec] main (./ui/mpich/mpiexec.c:294): process manager error waiting for completion
Sometimes I get a glibc "double free or corruption" error instead. Each process is single-threaded, and every process definitely calls MPI_Finalize(). Any idea what could be going wrong here?
I've written a small test program that should exit without any errors. Please try to run it. If it exits gracefully, then the problem is with your code.
#include <mpi.h>
#include <cstdio>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int my_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    // MPI_Finalize() returns MPI_SUCCESS on a clean shutdown.
    int finalize_retcode = MPI_Finalize();

    if (0 == my_rank) fprintf(stderr, "Process, return_code\n");
    fprintf(stderr, "%i, %i\n", my_rank, finalize_retcode);
    return 0;
}
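For reference, with MPICH I would build and launch it roughly like this (the file name and process count are just placeholders):

mpicxx test_finalize.cpp -o test_finalize
mpiexec -n 4 ./test_finalize

If the MPI installation is healthy, each rank should report MPI_SUCCESS (0 in MPICH) and mpiexec should exit cleanly instead of printing the demux/bootstrap errors shown above.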
I just ran into a similar problem. The code looked roughly like this:
MPI_Request* req = (MPI_Request*) malloc(sizeof(MPI_Request) * 2 * numThings * numItems);

int count;
for( item in items ) {
    count = 0;   // <-- reset on every outer iteration; this turns out to be the bug
    for( thing in things ) {
        MPI_Irecv(<recvBufF>, 1, MPI_INT, <src>,  <tag>, MPI_COMM_WORLD, &req[count++]);
        MPI_Isend(<sendBufF>, 1, MPI_INT, <dest>, <tag>, MPI_COMM_WORLD, &req[count++]);
    }
}

MPI_Status* stat = (MPI_Status*) malloc(sizeof(MPI_Status) * 2 * numThings * numItems);
MPI_Waitall(count, req, stat);
The call to MPI_Waitall(...) is made with a value of count that is less than the number of MPI_Isend and MPI_Irecv calls performed, which results in some messages never being received. Moving count = 0 outside the for loops resolved the MPI_Finalize(...) error.
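Here is a minimal, self-contained sketch of the corrected pattern, assuming a simple ring exchange; the buffer names, neighbour choice, and number of rounds are invented for illustration and are not from the code above. The point is that count is initialized once, so MPI_Waitall completes every outstanding request before MPI_Finalize is reached.

#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int rounds = 3;                       // stand-in for numItems * numThings
    std::vector<MPI_Request> req(2 * rounds);
    std::vector<int> sendBuf(rounds, rank), recvBuf(rounds, -1);

    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    int count = 0;                              // initialized once, outside all loops
    for (int r = 0; r < rounds; ++r) {
        MPI_Irecv(&recvBuf[r], 1, MPI_INT, left,  r, MPI_COMM_WORLD, &req[count++]);
        MPI_Isend(&sendBuf[r], 1, MPI_INT, right, r, MPI_COMM_WORLD, &req[count++]);
    }

    // count now equals the total number of requests, so nothing is still
    // pending when MPI_Finalize runs. Statuses aren't inspected here, so
    // MPI_STATUSES_IGNORE is passed instead of a status array.
    MPI_Waitall(count, req.data(), MPI_STATUSES_IGNORE);

    printf("rank %d received %d from rank %d\n", rank, recvBuf[0], left);
    MPI_Finalize();
    return 0;
}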