I have noticed that when I run a SLURM job, it can create files in other folders and also remove them. It seems dangerous that, via a SLURM job, users can access folders/files outside the job's own directory and modify them.
$ sbatch run.sh
run.sh:
#!/bin/bash
#SBATCH -o slurm.out # STDOUT
#SBATCH -e slurm.err # STDERR
echo hello > /home/avatar/completed.txt
rm /home/avatar/completed.txt
[Q] Is it possible to force SLURM to restrict a job's access to its own working folder only, and not to others?
File access is controlled through UNIX permissions, so a job can only write where the submitting user has permission to write. And most often, a job will need to read and write from and to several distinct directories on distinct filesystems (home NFS for configuration files and results, scratch parallel filesystem for input and intermediary data, etc.), so Slurm should not confine the job to the submission directory.
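To illustrate the point that it is the UNIX permission bits, not Slurm, that decide where a job can write, here is a small sketch you can run as any user. The temporary directories stand in for real home/scratch paths; everything here is illustrative, not Slurm-specific:

```shell
#!/bin/bash
# A job runs as the submitting user, so it can write exactly where
# that user can write. Demonstrate with two throwaway directories.

workdir=$(mktemp -d)            # a directory the user owns
chmod 700 "$workdir"            # owner may read/write/enter

# The user has the write bit here, so a job running as this user can write too.
echo hello > "$workdir/completed.txt" && echo "write ok"

# A directory without the write bit rejects the same operation.
restricted=$(mktemp -d)
chmod 500 "$restricted"         # read + execute only, no write bit
echo hello > "$restricted/blocked.txt" 2>/dev/null || echo "write denied"

# clean up
rm -rf "$workdir"
chmod 700 "$restricted"
rm -rf "$restricted"
```

Running this prints "write ok" followed by "write denied": the same process, the same user, and only the permission bits differ.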
If you want to make sure your job has no way to write outside of a specific directory, you can use the chroot command in your job submission script (note that plain chroot requires root privileges, so it is not usable by ordinary users on most clusters), but that seems a bit odd and less easy to manage than UNIX permissions.
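For completeness, a sketch of what the chroot approach would look like in a submission script. This is hypothetical: the jail path is made up, and chroot only works here if the site grants the necessary root privileges (e.g. through a privileged wrapper), which is exactly why it is awkward compared to plain permissions:

```shell
#!/bin/bash
#SBATCH -o slurm.out    # STDOUT
#SBATCH -e slurm.err    # STDERR

# Hypothetical jail directory prepared by the user/admin beforehand;
# chroot itself requires root, so this line fails for ordinary users.
chroot /home/avatar/jobroot /bin/bash -c 'echo hello > /completed.txt'
```

After the chroot, the path /completed.txt resolves to /home/avatar/jobroot/completed.txt, so the job cannot reach anything outside the jail.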