Is it a good idea to use MPI_Barrier() to synchronize data between iteration steps? Please see the pseudocode below.
while (numberIterations < MaxIterations)
{
    MPI_Iprobe()                                    -- check for incoming data
    while (flagprobe != 0)
    {
        MPI_Recv()                                  -- receive data
        MPI_Iprobe()                                -- loop if more data
    }

    updateData()                                    -- update myData

    for (i = 0; i < N; i++) MPI_Bsend_init(request[i])   -- set up requests
    for (i = 0; i < N; i++) MPI_Start(request[i])         -- send data to all other N processors

    if (numberIterations == MaxIterations / 2)
        MPI_Barrier()                               -- wait for all processors -- CAN I DO THIS?

    numberIterations++
}
Your code will deadlock, with or without a barrier. You receive in every rank before sending any data, so none of the ranks will ever get to a send call. Most applications will have a call such as MPI_Allreduce instead of a barrier after each iteration, so that all ranks can decide whether an error level is small enough, a task queue is empty, etc., and thus whether to terminate. Barriers should only be used if the correctness of the program depends on them. From your pseudocode I can't tell whether that's the case, but a single barrier halfway through a loop looks very suspect.
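As a minimal sketch of that reduction-based termination test (not taken from your code; the error measure, tolerance, and iteration count are made-up placeholders):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const double tolerance     = 1e-6;   /* placeholder convergence threshold */
    const int    maxIterations = 100;    /* placeholder iteration limit       */

    for (int iter = 0; iter < maxIterations; iter++) {
        /* ... exchange data and update local state here ... */

        double localError  = 1.0 / (iter + 1 + rank);  /* stand-in for a real residual */
        double globalError = 0.0;

        /* The collective acts as a synchronization point AND carries the data
           needed for the termination decision, unlike a bare MPI_Barrier. */
        MPI_Allreduce(&localError, &globalError, 1, MPI_DOUBLE,
                      MPI_MAX, MPI_COMM_WORLD);

        if (globalError < tolerance)
            break;          /* every rank breaks in the same iteration */
    }

    MPI_Finalize();
    return 0;
}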
In this article, http://static.msi.umn.edu/rreports/2008/87.pdf, it says that you have to call MPI_Request_free() before calling MPI_Bsend_init() again on the same request handle.
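For reference, here is a minimal sketch of that persistent buffered-send lifecycle (attach a buffer, create the persistent request, start it, wait, free it). The message size, tag, and peer rank are assumptions made for the example, not from the question or the article:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    enum { COUNT = 100, TAG = 0 };        /* made-up message size and tag */
    double payload[COUNT] = { 0.0 };

    /* Buffered sends need an explicitly attached user buffer. */
    int bufSize;
    MPI_Pack_size(COUNT, MPI_DOUBLE, MPI_COMM_WORLD, &bufSize);
    bufSize += MPI_BSEND_OVERHEAD;
    char *buffer = malloc(bufSize);
    MPI_Buffer_attach(buffer, bufSize);

    if (size > 1 && rank == 0) {
        MPI_Request request;

        /* Create the persistent request, start it, wait for completion,
           then free it before the handle is reused by another *_init call. */
        MPI_Bsend_init(payload, COUNT, MPI_DOUBLE, 1, TAG,
                       MPI_COMM_WORLD, &request);
        MPI_Start(&request);
        MPI_Wait(&request, MPI_STATUS_IGNORE);
        MPI_Request_free(&request);
    } else if (size > 1 && rank == 1) {
        MPI_Recv(payload, COUNT, MPI_DOUBLE, 0, TAG,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    void *detached;
    int detachedSize;
    MPI_Buffer_detach(&detached, &detachedSize);
    free(buffer);

    MPI_Finalize();
    return 0;
}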