Let's say there are two Python scripts that want to write data to the same table in an SQLite database file using the sqlite3 module. The SQLite file is stored on an NFS filesystem. In the SQLite FAQ I read:
SQLite uses reader/writer locks to control access to the database. [...] But use caution: this locking mechanism might not work correctly if the database file is kept on an NFS filesystem. This is because fcntl() file locking is broken on many NFS implementations. You should avoid putting SQLite database files on NFS if multiple processes might try to access the file at the same time.
Does this mean it is not possible at all, or is there some way to ensure that one process waits until the other is done?
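What I have in mind is a retry loop like the sketch below (untested; DB_PATH, insert_with_retry and the retry parameters are just placeholders I made up), but I am not sure whether retrying even helps when the locking itself is unreliable on NFS:

import sqlite3
import time

DB_PATH = "/nfs/share/data.db"  # hypothetical path on the NFS mount

def insert_with_retry(triples, retries=10, delay=1.0):
    # retry the whole write whenever another process holds the lock
    for attempt in range(retries):
        try:
            # timeout= already makes sqlite3 wait up to 30 s for the lock
            connection = sqlite3.connect(DB_PATH, timeout=30)
            with connection:  # commits on success, rolls back on error
                connection.executemany(
                    "INSERT INTO some_table (row, col, val) VALUES (?, ?, ?)",
                    triples,
                )
            connection.close()
            return
        except sqlite3.OperationalError:
            # typically "database is locked"; back off and try again
            time.sleep(delay)
    raise RuntimeError("could not write to the database after %d retries" % retries)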
The INSERTs are not complex, just something like:
INSERT_STATEMENT = "INSERT INTO some_table (row, col, val) VALUES (?, ?, ?)"
connection.executemany(INSERT_STATEMENT, triples)
And the inserted sets are disjoint.
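For context, each script essentially does the following (simplified sketch; the path and the sample triples are made up):

import sqlite3

connection = sqlite3.connect("/nfs/share/data.db")  # file on the NFS mount
triples = [(0, 0, 1.5), (0, 1, 2.5)]  # each script inserts its own disjoint set

INSERT_STATEMENT = "INSERT INTO some_table (row, col, val) VALUES (?, ?, ?)"
connection.executemany(INSERT_STATEMENT, triples)
connection.commit()
connection.close()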
A further question: do the NFS problems occur when two processes try to write to the same table, or when they try to write to the same database (which is a single file)? Would it be a workaround to let each process create its own table in the same database file and write to that?
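By that workaround I mean something like the following (untested; naming the tables after the process ID is just one example of how the per-process tables could be kept apart):

import os
import sqlite3

connection = sqlite3.connect("/nfs/share/data.db")  # same file on the NFS mount
triples = [(0, 0, 1.5), (0, 1, 2.5)]  # placeholder data

# every process writes only to its own table, e.g. some_table_12345
table = "some_table_%d" % os.getpid()
connection.execute(
    "CREATE TABLE IF NOT EXISTS %s (row INTEGER, col INTEGER, val REAL)" % table
)
connection.executemany(
    "INSERT INTO %s (row, col, val) VALUES (?, ?, ?)" % table,
    triples,
)
connection.commit()
connection.close()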