Is the behavior of MPI communication of a rank with itself well-defined?

Posted 2019-04-20 11:53

Question:

What happens if you use one of the MPI communication methods to have a rank communicate with itself? Is the behavior well-defined (e.g. guaranteed to succeed or to fail), or does it depend on chance or other uncontrollable factors whether the program will continue to run?

An example would be a fluid dynamics code, where each rank determines which grid cells need to be sent to the neighboring ranks to create the necessary halo for the computational stencil. If the simulation is started on just one rank, rank 0 would post non-blocking sends and receives to itself (exchanging essentially zero-length halo data).

Answer 1:

While you can avoid self-messaging as per suszterpatt's answer, self-messaging will work and is part of the MPI standard. There is even a pre-defined convenience communicator MPI_COMM_SELF. As long as the send/receive calls do not cause deadlock (for example, non-blocking calls are used), sending to self is fine. Of course, the send and receive buffers should not overlap.

Note that with Open MPI you need to have the self BTL enabled.
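
As an illustration, here is a minimal sketch (not part of the original answer) of a rank posting a non-blocking send and receive to itself. The tag, payload, and use of MPI_COMM_WORLD are arbitrary choices for the example:

    /* Minimal sketch: a rank sends a message to itself with
     * non-blocking calls; both requests are posted before waiting,
     * so no deadlock is possible even though source == destination. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int sendbuf = 42;   /* separate, non-overlapping buffers */
        int recvbuf = 0;
        MPI_Request reqs[2];

        MPI_Irecv(&recvbuf, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&sendbuf, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        printf("rank %d received %d from itself\n", rank, recvbuf);

        MPI_Finalize();
        return 0;
    }

Running this on a single process (e.g. mpirun -np 1 ./a.out) should print the value the rank sent to itself once MPI_Waitall completes.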


Source: MPI 1.1 Section 3.2.4

Source = destination is allowed, that is, a process can send a message to itself. (However, it is unsafe to do so with the blocking send and receive operations described above, since this may lead to deadlock. See Sec. 3.5. Semantics of point-to-point communication.)



Answer 2:

In a standard mode send (i.e. MPI_Send()), it is up to the MPI implementation to determine whether to buffer the message or not. It is reasonable to assume that any implementation, or at least the popular ones, will recognize a send to self and decide to buffer the message. Execution will then continue, and once the matching receive call is made, the message will be read from the buffer. If you want to be absolutely certain, you can use MPI_Bsend(), but then it is your responsibility to provide and manage the buffer via MPI_Buffer_attach() and MPI_Buffer_detach().
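
For completeness, here is a sketch of that buffered variant, assuming a single-int payload; the buffer sizing follows the usual MPI_Buffer_attach() pattern with MPI_BSEND_OVERHEAD:

    /* Sketch: buffered send to self.  MPI_Bsend copies the message into
     * the attached user buffer and returns immediately, so the matching
     * receive can be posted afterwards without deadlock. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Attach a buffer large enough for one int plus bookkeeping. */
        int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buffer = malloc(bufsize);
        MPI_Buffer_attach(buffer, bufsize);

        int sendval = 7, recvval = 0;
        MPI_Bsend(&sendval, 1, MPI_INT, rank, 0, MPI_COMM_WORLD);
        MPI_Recv(&recvval, 1, MPI_INT, rank, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

        /* Detach blocks until all buffered messages have been delivered. */
        MPI_Buffer_detach(&buffer, &bufsize);
        free(buffer);

        printf("rank %d received %d via buffered self-send\n", rank, recvval);

        MPI_Finalize();
        return 0;
    }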

However, the ideal solution to your specific problem is to use MPI_PROC_NULL as the source/destination argument of the send/receive calls, which causes the send and receive to forgo any communication and return as soon as possible.
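
Here is a sketch of how that might look for the halo exchange described in the question, using MPI_Sendrecv with a 1-D decomposition (the decomposition and variable names are assumptions, not taken from the question's code):

    /* Sketch: ranks with no neighbour on one side use MPI_PROC_NULL,
     * so on a single-rank run the call performs no communication at all
     * and returns immediately. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* 1-D decomposition: out-of-range neighbours become MPI_PROC_NULL. */
        int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        double halo_send = (double)rank;
        double halo_recv = -1.0;   /* unchanged if the source is MPI_PROC_NULL */

        MPI_Sendrecv(&halo_send, 1, MPI_DOUBLE, right, 0,
                     &halo_recv, 1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d: halo_recv = %g\n", rank, halo_recv);

        MPI_Finalize();
        return 0;
    }

With mpirun -np 1, both neighbours are MPI_PROC_NULL, so the call returns immediately and halo_recv keeps its initial value.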



Tags: mpi