I have a Node.js application that stores some configuration data in a file. If you change some settings, the configuration file is written to disk.
At the moment, I am using a simple fs.writeFile call.
Now my question is: What happens if Node.js crashes while the file is being written? Is there a chance of ending up with a corrupt file on disk? Or does Node.js guarantee that the file is written atomically, so that either the old or the new version is valid?
If not, how could I implement such a guarantee? Are there any modules for this?
fs.writeFile, just like all the other methods in the fs module, is implemented as a simple wrapper around the standard POSIX functions (as stated in the docs). Digging a bit into Node.js' source, one can see that fs.js, where all the wrappers are defined, delegates its file system calls to the native binding; more specifically, the write method is used to write the contents of the buffer.

The POSIX specification for write does make an atomicity guarantee for writes of at most PIPE_BUF bytes, but note two caveats: that guarantee applies to pipes and FIFOs, not to regular files, and it only rules out interleaving with writes from other processes; it says nothing about what ends up on disk if the process crashes mid-write. PIPE_BUF is also a system-dependent constant, so you would need to check its value on your target platform. In short, this does not give you the crash-safety guarantee you are after for a configuration file.
Node implements only a (thin) async wrapper over the system calls, so it does not provide any guarantee about the atomicity of writes. In fact, the internal fs.writeAll helper repeatedly calls fs.write until all of the data is written. You are right that when Node.js crashes mid-write, you may end up with a corrupted file.

The simplest solution I can come up with is the one used e.g. for FTP uploads: write the data to a temporary file in the same directory first, and once the write has completed, rename it over the original file.
The rename(2) man page guarantees that an instance of newpath will remain in place throughout the operation (on Unix systems such as Linux or macOS), so another process will never find the file missing or half-replaced.
The write-file-atomic module will do what you need: it writes to a temporary file and then renames it over the target, which is safe.