R + Fortran + MPI memory not mapped error

Posted 2019-08-02 16:42

Question:

I'm trying to use, in R, a Fortran module that uses MPI.

This is the Fortran module:

Module Fortranpi
USE MPI
IMPLICIT NONE
contains
subroutine dboard(darts, dartsscore)
integer, intent(in)                    :: darts
double precision, intent(out)          :: dartsscore
double precision                       :: x_coord, y_coord
integer                                :: score, n

score = 0
do n = 1, darts
  call random_number(x_coord)
  call random_number(y_coord)

  if ((x_coord**2 + y_coord**2) <= 1.0d0) then
    score = score + 1
  end if
end do

dartsscore = 4.0d0*score/darts

end subroutine dboard

subroutine MPIpi(avepi, DARTS, ROUNDS) bind(C, name="pi2_")
use, intrinsic                         :: iso_c_binding, only : c_double, c_int
real(c_double), intent(out)            :: avepi
integer(c_int), intent(in)             :: DARTS, ROUNDS
integer                                :: i, n, mynpts, ierr, numprocs, proc_num
integer, allocatable                   :: seed(:)
double precision                       :: pi_est, y, sumpi

call mpi_init(ierr)
call mpi_comm_size(MPI_COMM_WORLD, numprocs, ierr)
call mpi_comm_rank(MPI_COMM_WORLD, proc_num, ierr)

! rank 0 picks up the remainder so that all ROUNDS iterations are covered
if (proc_num .eq. 0) then
  mynpts = ROUNDS - (numprocs-1)*(ROUNDS/numprocs)
else
  mynpts = ROUNDS/numprocs
endif

! initialize the random number generator
! we make sure the seed is different for each task
call random_seed()
call random_seed(size = n)
allocate(seed(n))
seed = 12 + proc_num*11
call random_seed(put=seed(1:n))
deallocate(seed)

y = 0.0d0
do i = 1, mynpts
  call dboard(DARTS, pi_est)
  y = y + pi_est
end do

call mpi_reduce(y, sumpi, 1, mpi_double_precision, mpi_sum, 0, &
                  mpi_comm_world, ierr)
if (proc_num==0) avepi = sumpi/ROUNDS
call mpi_finalize(ierr)
end subroutine MPIpi

end module Fortranpi

I can compile it with:

mpif90 -fpic -shared -o Fpi.so Fpi.f90

This is the R code I'm trying to run:

# SPMD-style program: start all workers via mpirun
library(Rmpi)
dyn.load("Fpi.so")
DARTS=5000
ROUNDS=1000
MyPi <- .Fortran("pi2", avepi = as.numeric(1), DARTS =  as.integer(DARTS), ROUNDS =  as.integer(ROUNDS))$avepi
saveRDS(MyPi, file = 'MyPi.RDS')

# Finalize MPI and quit
mpi.quit()

This is what I get when I run it:

$ mpirun -n 2 R --slave -f MyPi.R

*** caught segfault ***
  address 0x44000098, cause 'memory not mapped'
--------------------------------------------------------------------------
  Calling MPI_Init or MPI_Init_thread twice is erroneous.
--------------------------------------------------------------------------

  Traceback:
  1: .Fortran("pi2", avepi = as.numeric(1), DARTS = as.integer(DARTS),     ROUNDS = as.integer(ROUNDS))
aborting ...

*** caught segfault ***
  address 0x44000098, cause 'memory not mapped'

Traceback:
  1: .Fortran("pi2", avepi = as.numeric(1), DARTS = as.integer(DARTS),     ROUNDS = as.integer(ROUNDS))
aborting ...
--------------------------------------------------------------------------
  mpirun noticed that process rank 1 with PID 6400 on node 2d60fd60575b exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
  2 total processes killed (some possibly by mpirun during cleanup)
$ 

What am I doing wrong?

Answer 1:

Don't do

library(Rmpi)
dyn.load("Fpi.so")

Put your code into a package, install it on all nodes, and have it loaded on all nodes. I like

clusterEvalQ(cl, library(myPackage))

for that, to ensure the package is actually loaded on every worker (where cl is a snow cluster object).
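
For illustration, a minimal sketch of that setup, assuming the Fortran code has been wrapped in a hypothetical package called myPackage that is already installed on every node (the package name and cluster size are placeholders, not from the original answer):

library(Rmpi)
library(snow)

# spawn one worker per available MPI slot; cl is the snow cluster object
cl <- makeMPIcluster(mpi.universe.size() - 1)

# ensure the package (and its compiled Fortran code) is loaded on every worker
clusterEvalQ(cl, library(myPackage))

# ... dispatch the actual pi computation to the workers here ...

stopCluster(cl)
mpi.quit()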

I also prefer r or Rscript as the scripting frontend ...
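
As a small illustration of that last point (the script name and process count are simply reused from the question), launching with Rscript would look like:

mpirun -n 2 Rscript MyPi.R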