I have scripts that create hundreds of small temp files in quick succession; each is read back in shortly after being written, then unlinked.
My testing shows little if any performance difference between putting these files in /tmp (disk-backed) and in /dev/shm (filesystem-level shared memory) on Linux, even under moderate load. I attribute this to the filesystem cache.
Granted, the disk will eventually get hit by the filesystem actions, but for many small write-read temp files, why would you (not) recommend /dev/shm over a disk-backed directory? Have you noticed big performance increases with a shared-memory directory over a cached VFS?
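A minimal benchmark sketch of the workload described above (create, write, read back, unlink many small files), timing the same loop against both directories. The file count and size are illustrative, not taken from the question:

```python
import os
import tempfile
import time

def bench(dirpath, n=200, size=4096):
    """Time n create -> write -> read-back -> unlink cycles of small files in dirpath."""
    payload = os.urandom(size)
    start = time.perf_counter()
    for _ in range(n):
        fd, path = tempfile.mkstemp(dir=dirpath)
        with os.fdopen(fd, "wb") as f:
            f.write(payload)
        with open(path, "rb") as f:
            assert f.read() == payload  # read the file back, as the scripts do
        os.unlink(path)
    return time.perf_counter() - start

# Compare both locations where present (typical Linux paths):
for d in ("/tmp", "/dev/shm"):
    if os.path.isdir(d):
        print(f"{d}: {bench(d):.3f}s")
```

On a system where the page cache absorbs the writes, the two timings tend to come out very close, matching the observation in the question.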
It is essentially the same: shm is also implicitly backed by the disk when you have swap, since its pages can be swapped out under memory pressure.
A disk-backed /tmp has the advantage that it is harder to fill up (your hard disk is likely larger than your swap space), and it is also more widely supported.
/dev/shm is intended for a very specific purpose, not as a place for arbitrary programs to put files. In contrast, /tmp is made exactly for this. On my systems, /tmp is a tmpfs as well, in contrast to /var/tmp, which is designed for larger files that potentially stay around longer.
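Whether /tmp is a tmpfs on a given machine can be checked against /proc/mounts. A small Linux-only sketch (the directory list is illustrative):

```python
import os

def mount_type(path):
    """Return the filesystem type backing `path`, per /proc/mounts (Linux only)."""
    target = os.path.realpath(path)
    best_mnt, best_type = "", None
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _, mnt, fstype = line.split()[:3]
            # The longest mount point that is a prefix of the target wins.
            if (target == mnt or target.startswith(mnt.rstrip("/") + "/")) \
                    and len(mnt) > len(best_mnt):
                best_mnt, best_type = mnt, fstype
    return best_type

for d in ("/tmp", "/var/tmp", "/dev/shm"):
    if os.path.isdir(d):
        print(d, mount_type(d))
```

On most distributions /dev/shm reports tmpfs, while /tmp varies: some mount it as tmpfs, others leave it on the root filesystem.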