N is 4, and so is N_glob; they happen to be the same size. p is 4.
Here is a small portion of the code:
float **global_grid;
float **gridPtr;
lengthSubN = N/pSqrt;
subN = lengthSubN + 2;
grid = allocate2D(grid, subN, subN);
..
MPI_Type_contiguous(lengthSubN, MPI_FLOAT, &rowType);
MPI_Type_commit(&rowType);
..
gridPtr = grid;
..
MPI_Barrier(MPI_COMM_WORLD);
if(id == 0) {
    global_grid = allocate2D(global_grid, N_glob, N_glob);
}
MPI_Barrier(MPI_COMM_WORLD);
MPI_Gather(&(gridPtr[0][0]), 1, rowType,
           &(global_grid[0][0]), 1, rowType, 0, MPI_COMM_WORLD);
MPI_Barrier(MPI_COMM_WORLD);
if(id == 0)
    print(global_grid, N_glob, N_glob);
where I have p submatrices and I am trying to gather them all into the root process, where the global matrix waits for them. However, it just throws an error; any ideas?
I am receiving a seg fault:
BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
PID 29058 RUNNING AT linux16
EXIT CODE: 139
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
EDIT:
I found the question "MPI_Gather segmentation fault" and initialized global_grid to NULL, but no luck. However, if I do:
//if(id == 0) {
global_grid = allocate2D(global_grid, N_glob, N_glob);
//}
then everything works. But shouldn't the global matrix live only in the root process?
EDIT_2:
If I do:
if(id == 0) {
    global_grid = allocate2D(global_grid, N_glob, N_glob);
} else {
    global_grid = NULL;
}
then it will crash here:
MPI_Gather(&gridPtr[0][0], 1, rowType,
           global_grid[0], 1, rowType, 0, MPI_COMM_WORLD);
The variable global_grid is not initialized in the ranks other than rank 0. Thus, the expression &(global_grid[0][0]), or equivalently global_grid[0], leads to a segmentation fault on those ranks, because it tries to access the first element of global_grid.
Just make two calls to MPI_Gather: one for rank 0 and one for the other ranks:
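Here is a minimal sketch of what the two branches could look like, reusing gridPtr, rowType, and global_grid from the question (the exact arguments are an assumption; the key point is that MPI_Gather's receive-side arguments are significant only at the root, so the other ranks can pass NULL and never dereference global_grid):

if (id == 0) {
    /* Root supplies a valid receive buffer. */
    MPI_Gather(&(gridPtr[0][0]), 1, rowType,
               &(global_grid[0][0]), 1, rowType, 0, MPI_COMM_WORLD);
} else {
    /* The receive arguments are ignored on non-root ranks,
       so NULL is safe here and global_grid is never touched. */
    MPI_Gather(&(gridPtr[0][0]), 1, rowType,
               NULL, 0, rowType, 0, MPI_COMM_WORLD);
}

Every rank still participates in the same collective; the only difference between the two branches is that non-root ranks never evaluate &(global_grid[0][0]).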