Starting slurm array job with a specified number of nodes

Published 2019-09-02 10:25

Question:

I’m trying to align 168 sequence files on our HPC using slurm version 14.03.0. I’m only allowed to use a maximum of 9 compute nodes at once to keep some nodes open for other people.

I changed the file names so I could use the array function in sbatch. The sequence files look like this: Sequence1.fastq.gz, Sequence2.fastq.gz, … Sequence168.fastq.gz

I can’t seem to figure out how to tell it to run all 168 files, 9 at a time. I can get it to run all 168 files, but it uses all the available nodes, which will get me in trouble since this is going to run for a few days.

I've found that "--array=1-168%9" should limit how many tasks run at once, but that syntax was added in a Slurm version newer than the one on our cluster. Is there another way to get this functionality? I've been trying things and pulling my hair out for a couple of weeks.

The way I’m trying to run it is:

#!/bin/bash
#SBATCH --job-name=McSeqs
#SBATCH --nodes=1
#SBATCH --array=1-168
srun alignmentProgramHere Sequence${SLURM_ARRAY_TASK_ID}.fastq.gz -o outputdirectory/
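For anyone on a newer release: the %N throttle mentioned above (added in Slurm 14.11, if I remember right) would be a one-line change to this same script:

```shell
#!/bin/bash
#SBATCH --job-name=McSeqs
#SBATCH --nodes=1
# %9 caps the number of simultaneously running array tasks at 9;
# all 168 tasks are queued, but only 9 run at any given time.
#SBATCH --array=1-168%9

srun alignmentProgramHere Sequence${SLURM_ARRAY_TASK_ID}.fastq.gz -o outputdirectory/
```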

Thanks! Matt

Answer 1:

So I think I figured out a way to make it work. The trick is that the sbatch options get passed to every array task, so I used the --exclude option to tell each task to stay off all but 9 of the compute nodes. Since each task asks for a whole node (--nodes=1), at most 9 of my files run at once, leaving the remaining nodes open for other people.

#!/bin/bash
#SBATCH --job-name=McSeqs
#SBATCH --nodes=1
#SBATCH --array=1-168
#SBATCH --exclude=cluster[10-20]

srun alignmentProgramHere Sequence${SLURM_ARRAY_TASK_ID}.fastq.gz -o outputdirectory/
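Another workaround sometimes suggested for older Slurm versions is to split the array into chunks of 9 and chain each chunk on the previous one with --dependency=afterany, which also exists in 14.03. Here is a dry-run sketch that just prints the submissions it would make; "align.sbatch" is a placeholder name for the real submission script, and the job-id capture for a live run is shown as a comment:

```shell
#!/bin/bash
# Emulate "--array=1-168%9" on old Slurm by chaining chunks of 9 tasks.
# Dry run: prints the sbatch commands instead of submitting them.

plan_chunks() {
  local total=$1 chunk=$2 dep="" start end
  for ((start = 1; start <= total; start += chunk)); do
    end=$((start + chunk - 1))
    ((end > total)) && end=$total
    echo "sbatch${dep} --array=${start}-${end} align.sbatch"
    # In a live run, capture the real job id from sbatch's output,
    # e.g.:  jobid=$(sbatch ... | awk '{print $4}')
    dep=" --dependency=afterany:JOBID_OF_PREVIOUS_CHUNK"
  done
}

plan_chunks 168 9
```

Note this is stricter than %9: chunk k+1 only starts after every task in chunk k has finished, so nodes can sit idle near the end of each chunk. The --exclude trick above avoids that gap.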


Tags: hpc slurm sbatch