I am working under the assumption that MPI processes operate on separate and unique data from start to finish, even on the same machine. However, my code, which I expect to create one global object per MPI process:
    #include <iostream>
    #include <mpi.h>

    class global { /* the class */ };

    extern global obj; // declaration, e.g. from a header
    global obj;        // definition: one instance per process

    int main( int argc, char * argv[] ) {
        MPI_Init( &argc, &argv );
        int rank;
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
        std::cout << rank << " global object is at " << &obj << std::endl;
        MPI_Finalize();
        return 0;
    }
Run with -np 2, this prints:
0 global object is at 0x620740
1 global object is at 0x620740
Could this be a source of segmentation faults or other errors, since the two MPI processes on the same machine appear to be accessing the same memory address to reach their own global objects?
EDIT: I should mention that by 'global' I do not mean global across all MPI processes, but global within each separate MPI process.
MPI starts multiple processes from the same executable file. Usually this results in those processes having the same initial memory layout, with only the location of the stack and the locations where shared libraries are mapped possibly differing. In your case obj is an uninitialised static object and as such is placed in the BSS section, which usually comes right after the initialised data section. The amount of data is known in advance and so is the placement of the BSS section - both are fixed by the linker. Therefore obj ends up at the same address in each process created from that executable.
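To illustrate the layout point, here is a small MPI-free sketch (the variable names are mine): a global with an initialiser goes into the data section, one without goes into the BSS. On a given build the linker fixes both addresses; note that with position-independent executables and address space layout randomisation the whole image may be loaded at a different base per run, so the absolute values can vary even though the relative layout does not.

    #include <iostream>

    int initialised_var = 42; // has an initialiser -> data section
    int uninitialised_var;    // zero-initialised   -> BSS section

    int main() {
        // On a typical layout the BSS address follows the data address;
        // both are decided at link time, not at run time.
        std::cout << "data: " << &initialised_var
                  << "  bss: " << &uninitialised_var << std::endl;
        return 0;
    }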
This is not a problem for MPI, since every process has its own virtual memory space and the addresses that you see are virtual addresses, valid only in the virtual memory space of the corresponding process. In other words, 0x620740 in rank 0 and 0x620740 in rank 1 are completely different locations in physical memory, because both point to the same location in two different virtual memory spaces.
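A quick way to convince yourself of this is to have each rank store a different value at that "same" address and read it back. A minimal sketch (`per_rank_value` is a name made up for the example):

    #include <iostream>
    #include <mpi.h>

    int per_rank_value = 0; // same virtual address in every rank

    int main( int argc, char * argv[] ) {
        MPI_Init( &argc, &argv );
        int rank;
        MPI_Comm_rank( MPI_COMM_WORLD, &rank );

        per_rank_value = rank * 100; // each rank writes a different value

        // Make sure everyone has written before anyone prints.
        MPI_Barrier( MPI_COMM_WORLD );

        // Same address in every rank, yet each rank sees only its own value.
        std::cout << rank << ": &per_rank_value = " << &per_rank_value
                  << ", value = " << per_rank_value << std::endl;

        MPI_Finalize();
        return 0;
    }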
In general MPI does not have the notion of global (or shared) objects, since by presumption every process in an MPI job only has access to its own isolated memory space. In reality processes can (and usually do) share memory when they run on the same physical node - e.g. MPI implementations usually pass messages through shared memory when the processes run on a multicore, multiprocessor, or other kind of shared memory machine - but unless you take special steps to place obj in a specially created shared memory segment, it will not be shared.
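For completeness, one such special step is an MPI-3 shared-memory window. The sketch below (names like `nodecomm` and `shared_int` are mine) lets the ranks on one node allocate a single segment via MPI_Win_allocate_shared, so a value written by one rank is visible to the others; a production version would add proper window synchronisation (e.g. MPI_Win_lock_all/MPI_Win_sync) rather than relying on a barrier alone.

    #include <iostream>
    #include <mpi.h>

    int main( int argc, char * argv[] ) {
        MPI_Init( &argc, &argv );

        // Group together the ranks that can actually share physical memory.
        MPI_Comm nodecomm;
        MPI_Comm_split_type( MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                             MPI_INFO_NULL, &nodecomm );
        int noderank;
        MPI_Comm_rank( nodecomm, &noderank );

        // Rank 0 on the node contributes the storage; the rest contribute none.
        int *shared_int = nullptr;
        MPI_Win win;
        MPI_Aint size = (noderank == 0) ? sizeof(int) : 0;
        MPI_Win_allocate_shared( size, sizeof(int), MPI_INFO_NULL, nodecomm,
                                 &shared_int, &win );

        // Non-zero ranks query for the start of rank 0's segment.
        if ( noderank != 0 ) {
            MPI_Aint qsize;
            int qdisp;
            MPI_Win_shared_query( win, 0, &qsize, &qdisp, &shared_int );
        }

        if ( noderank == 0 ) *shared_int = 42; // written once by rank 0...
        MPI_Barrier( nodecomm );
        std::cout << noderank << " sees shared value " << *shared_int
                  << " at " << shared_int << std::endl; // ...visible to all

        MPI_Win_free( &win );
        MPI_Finalize();
        return 0;
    }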