Slurm Multiprocessing Python Job

Posted 2019-04-11 12:25

Question:

I have a 4-node Slurm cluster, each node with 6 cores. I would like to submit a test Python script (it spawns processes that print the hostname of the node they run on) using multiprocessing, as follows:

from multiprocessing import Pool
from os import environ
from socket import gethostname

def print_something():
    print(gethostname())

# number of processes allowed to run on the cluster at a given time
n_procs = int(environ['SLURM_JOB_CPUS_PER_NODE']) * int(environ['SLURM_JOB_NUM_NODES'])

# tell Python how many processes can run at a time
pool = Pool(n_procs)

# spawn an arbitrary number of processes
for i in range(200):
    pool.apply_async(print_something)
pool.close()
pool.join()

I submit this with an sbatch script that specifies --nodes=4 and --ntasks-per-node=6, but I am finding that the Python script gets executed 24 times (4 × 6). I just want the job to execute the script once and let Slurm distribute the process spawns across the cluster.

I'm obviously not understanding something here...?

Answer 1:

Ok, I figured it out.

I needed a better understanding of the relationship between sbatch and srun. Mainly, sbatch acts as a global job container (the allocation) inside which srun invocations run as job steps.

The biggest factor here was changing from Python's multiprocessing to subprocess. This way, sbatch executes a single Python script exactly once, and that script in turn dynamically invokes srun (via subprocess) on another Python script; Slurm then allocates cluster resources to each of those job steps appropriately.
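
Roughly, that driver script looks like the sketch below (this is illustrative, not my exact code: the file names are placeholders, and depending on your Slurm version you may need --exact instead of --exclusive to keep job steps from sharing CPUs):

# driver.py -- run exactly once by the batch script, e.g. with
#   #SBATCH --nodes=4
#   #SBATCH --ntasks-per-node=6
#   python driver.py
import subprocess

procs = []
for i in range(200):
    # each srun call creates a job step that claims one task from the
    # job's allocation; Slurm decides which node it runs on
    p = subprocess.Popen(["srun", "--nodes=1", "--ntasks=1", "--exclusive",
                          "python", "worker.py"])
    procs.append(p)

# wait for all job steps to finish
for p in procs:
    p.wait()

Here worker.py would be the hostname-printing script from the question. Since the allocation only has 24 tasks, Slurm runs at most 24 of these steps at a time and queues the remaining srun calls until a task frees up.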