I would like to use CPU cores from multiple nodes to execute a single R script. Each node has 16 cores, and nodes are assigned to me via Slurm.
So far my code looks like the following:
library(foreach)
library(doParallel)

ncores <- 16
List_1 <- list(...)
List_2 <- list(...)
cl <- makeCluster(ncores)
registerDoParallel(cl)
getDoParWorkers()
foreach(L_1=List_1) %:%
foreach(L_2=List_2) %dopar% {
...
}
stopCluster(cl)
I execute it via the following command in a UNIX shell:
mpirun -np 1 R --no-save < file_path_R_script.R > another_file_path.Rout
That works fine on a single node. However, I have not figured out whether it is sufficient to increase ncores to 32 once I have access to a second node. Does R include the additional 16 cores on the other node automatically? Or do I have to make use of another R package?
Using mpirun to launch an R script does not make sense without using Rmpi.
Looking at your code, you might be able to do what you want without MPI. The recipe for using 2x16 cores would be as follows.
Ask for 2 tasks with 16 CPUs per task
#SBATCH --nodes 2
#SBATCH --ntasks 2
#SBATCH --cpus-per-task 16
Start your program with Slurm's srun command
srun R --no-save < file_path_R_script.R > another_file_path.Rout
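Putting the two steps together, a minimal Slurm batch script might look like the following sketch (the script and output file names are the placeholders from the question; options a real cluster may also require, such as a partition or time limit, are omitted):

```shell
#!/bin/bash
#SBATCH --nodes 2
#SBATCH --ntasks 2
#SBATCH --cpus-per-task 16

# srun starts one R process per task, i.e. one on each node
srun R --no-save < file_path_R_script.R > another_file_path.Rout
```

You would then submit it with sbatch.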
The srun command will start 2 instances of the R script on two distinct nodes and will set the environment variable SLURM_PROCID to 0 on one node and 1 on the other.
Use the value of SLURM_PROCID in your R script to split the work among the two processes started by srun
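To see how the split works, here is a small self-contained illustration (the six-element list is made up for the example): split() with the grouping factor 1:2 deals the list elements out alternately, so task 0 takes the first group and task 1 the second.

```r
# Illustration only: split a list of 6 work items between 2 tasks.
# The grouping factor 1:2 is recycled to 1,2,1,2,1,2, so elements
# 1,3,5 go to group "1" and elements 2,4,6 to group "2".
items  <- as.list(1:6)
halves <- split(items, 1:2)

# Task with SLURM_PROCID == 0 would work on halves[[0 + 1]],
# task with SLURM_PROCID == 1 on halves[[1 + 1]].
```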
library(foreach)
library(doParallel)

ncores <- 16
taskID <- as.numeric(Sys.getenv('SLURM_PROCID'))
List_1 <- list(...)
List_2 <- list(...)
cl <- makeCluster(ncores)
registerDoParallel(cl)
getDoParWorkers()
List_1 <- split(List_1, 1:2)[[taskID + 1]] # Split work based on value of SLURM_PROCID
foreach(L_1=List_1) %:%
foreach(L_2=List_2) %dopar% {
...
}
stopCluster(cl)
You will need to save the results to disk and then merge the partial results into a single full result.
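One way to do this, sketched below with hypothetical helper functions (save_partial and merge_partials are names invented for this example): each task writes its partial result to an .rds file tagged with its task ID, and a later step reads all partial files back and concatenates them.

```r
# Hypothetical sketch: each srun task saves its partial result to disk;
# after both tasks finish, a separate step merges the files.

# Write one task's result to e.g. "partial_result_0.rds".
save_partial <- function(result, task_id, dir = ".") {
  saveRDS(result, file.path(dir, sprintf("partial_result_%d.rds", task_id)))
}

# Read all partial files back and concatenate them into one list.
merge_partials <- function(dir = ".") {
  files <- Sys.glob(file.path(dir, "partial_result_*.rds"))
  do.call(c, lapply(files, readRDS))
}
```

In the script above, each task would call save_partial(results, taskID) after the foreach loop, and the merge step would run once, after both tasks have completed.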