I'm working on a Python script that will be accessed via the web, so multiple users may try to append to the same file at the same time. My worry is that this could cause a race condition: if several users write to the file simultaneously, the file might end up corrupted.
For example:
#!/usr/bin/env python
g = open("/somepath/somefile.txt", "a")
new_entry = "foobar"
g.write(new_entry)
g.close()
Will I have to use a lockfile for this, since the operation looks risky?
You can use file locking:
import fcntl
new_entry = "foobar"
with open("/somepath/somefile.txt", "a") as g:
    fcntl.flock(g, fcntl.LOCK_EX)   # block until we hold an exclusive lock
    g.write(new_entry)
    g.flush()                       # make sure the data is written before releasing the lock
    fcntl.flock(g, fcntl.LOCK_UN)
Note that on some systems, locking is not needed if you're only writing small buffers, because appends on these systems are atomic.
Depending on your platform and where the file lives, this may not be doable in a safe manner (e.g. on NFS, where file locking is unreliable). Perhaps you could have each writer append to its own file and merge the results afterwards?
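If you go that route, here is a minimal sketch of the idea, assuming a spool directory each request can write to and that the merge runs while no writers are active (the paths and the helper names append_entry/merge_entries are just placeholders):
import os
import glob

SPOOL_DIR = "/somepath/spool"  # placeholder: directory for per-writer files, must already exist

def append_entry(new_entry):
    # Each process appends only to its own file, so no locking is needed.
    path = os.path.join(SPOOL_DIR, "entries.%d.txt" % os.getpid())
    with open(path, "a") as f:
        f.write(new_entry + "\n")

def merge_entries(target="/somepath/somefile.txt"):
    # Run this from a single process, while no writers are active,
    # to combine the per-writer files into the real target file.
    with open(target, "a") as out:
        for part in sorted(glob.glob(os.path.join(SPOOL_DIR, "entries.*.txt"))):
            with open(part) as f:
                out.write(f.read())
            os.remove(part)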
If you are doing this on Linux and each entry is written with a single write() call of less than 4KB to a file opened in append mode, the write is effectively atomic and you should be good.
More to read here: Is file append atomic in UNIX?
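As a sketch of what relying on that behaviour looks like (this assumes a local Linux filesystem and entries well under 4KB), the key points are opening the file in append mode and issuing exactly one write() per entry:
import os

def append_entry(new_entry):
    # O_APPEND tells the kernel to position every write at the current end of
    # the file; a single small write() is not interleaved with other writers
    # on local Linux filesystems.
    data = (new_entry + "\n").encode("utf-8")
    fd = os.open("/somepath/somefile.txt", os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)  # one write() call per entry, well under 4KB
    finally:
        os.close(fd)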
You didn't state what platform you use, but here is a module you can use that is cross-platform:
File locking in Python
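For example, the third-party portalocker package wraps fcntl on POSIX and the Windows locking API behind a single interface. A minimal sketch, assuming portalocker is installed (check its documentation for the exact API):
import portalocker

new_entry = "foobar"
with open("/somepath/somefile.txt", "a") as g:
    portalocker.lock(g, portalocker.LOCK_EX)  # exclusive lock on both Windows and POSIX
    g.write(new_entry)
    g.flush()                                 # flush before releasing the lock
    portalocker.unlock(g)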