I wonder when I need to use a barrier. Do I need it before/after a scatter/gather, for example? Or should OMPI ensure all processes have reached that point before scatter/gather-ing? Similarly, after a broadcast, can I expect all processes to have already received the message?
All collective operations in MPI before MPI-3.0 are blocking, which means that it is safe to use all buffers passed to them after they return. In particular, this means that all data was received when one of these functions returns. (However, it does not imply that all data was sent!) So MPI_Barrier is not necessary (or very helpful) before/after collective operations, if all buffers are valid already.
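To illustrate the point for the original question, a minimal sketch (the value 42 and the use of MPI_Bcast are illustrative) showing that no barrier is needed around a blocking collective:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;

    /* Blocking collective: when MPI_Bcast returns, 'value' holds the
     * root's data on every rank. No MPI_Barrier before or after is
     * required for correctness. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}
```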
Please also note that MPI_Barrier does not magically wait for non-blocking calls. If you use a non-blocking send/recv and both processes wait at an MPI_Barrier after the send/recv pair, it is not guaranteed that the processes have sent/received all data after the MPI_Barrier. Use MPI_Wait (and friends) instead. So the following piece of code contains errors:
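The answer's code block was lost in extraction; this is a reconstruction of the kind of erroneous pattern it describes (ranks, tags, and buffers are illustrative). It assumes an initialized MPI program with `rank` already obtained:

```c
MPI_Request request;
int data = 0;

if (rank == 0) {
    data = 42;
    MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
    MPI_Barrier(MPI_COMM_WORLD);
    data = 0;                      /* (!) the buffer may still be in flight */
} else if (rank == 1) {
    MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
    MPI_Barrier(MPI_COMM_WORLD);
    printf("%d\n", data);          /* (!) the data may not have arrived yet */
}
```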
Both lines that are marked with (!) are unsafe! MPI_Barrier is only useful in a handful of cases. Most of the time you do not care whether your processes sync up. Better read about blocking and non-blocking calls!
One use of MPI_Barrier is, for example, to control access to an external resource such as the filesystem, which is not accessed using MPI. For example, if you want each process to write stuff to a file in sequence, each process takes its turn while all others wait at a barrier on every iteration. That way, you can be sure that no two processes are concurrently calling writeStuffToTheFile.
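The answer's code block is missing here; the rank-ordered loop it describes can be sketched as follows (`writeStuffToTheFile` is the answer's own placeholder for a non-MPI file-writing routine):

```c
int rank, nprocs;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

for (int i = 0; i < nprocs; i++) {
    if (i == rank)
        writeStuffToTheFile();     /* only one rank writes per iteration */
    /* Everyone waits here, so rank i+1 cannot start writing
     * before rank i has finished. */
    MPI_Barrier(MPI_COMM_WORLD);
}
```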
MPI_Barrier() is not often used, but it is useful. In fact, even if you use synchronous communication, MPI_Send/Recv() can only make sure the two processes involved are synchronized. In my project, a CUDA+MPI project, all communication is asynchronous. I found that in some cases, if I don't use an MPI_Barrier() after the Wait() calls, it is very likely that two processes (GPUs) try to transmit data to each other at the same time, which can badly reduce program efficiency. That bug drove me mad and took me a few days to find. Therefore, think carefully about whether to use MPI_Barrier() when you use MPI_Isend/Irecv in your program. Sometimes synchronizing the processes is not only necessary but a must, especially when your program is dealing with a device.
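Pulling the answers together, a sketch of the safe ordering for a non-blocking exchange, assuming exactly two ranks in the communicator: completion is guaranteed by MPI_Waitall, and any barrier comes afterwards purely for synchronization:

```c
MPI_Request reqs[2];
int sendbuf = rank, recvbuf = -1;
int peer = 1 - rank;               /* assumes exactly two ranks: 0 and 1 */

MPI_Isend(&sendbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[0]);
MPI_Irecv(&recvbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[1]);

/* MPI_Waitall, not MPI_Barrier, guarantees the transfers completed:
 * after this call, sendbuf may be reused and recvbuf is valid. */
MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

/* An optional barrier here only lines the processes up in time; it adds
 * nothing to the completion guarantee already provided by MPI_Waitall. */
MPI_Barrier(MPI_COMM_WORLD);
```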