Does SQLite3 safely handle concurrent access by multiple processes reading/writing from the same DB? Are there any platform exceptions to that?
This thread is old, but I think it would be good to share the results of my tests on SQLite: I ran two instances of a Python program (different processes, same program) executing SELECT and UPDATE SQL statements within a transaction, with an EXCLUSIVE lock and a timeout of 10 seconds to get the lock, and the results were frustrating. Each instance ran a 10,000-step loop:
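(The original code listing was lost from this post; below is a hypothetical reconstruction of the kind of loop described, not the author's actual code. The `counter` table and `test.db` file name are my assumptions.)

```python
import sqlite3

# Hypothetical reconstruction of the test described above: each of the
# two processes increments a shared counter 10,000 times, each step
# inside an EXCLUSIVE transaction, with a 10-second busy timeout.
# The "counter" table and "test.db" file name are assumptions.
conn = sqlite3.connect("test.db", timeout=10, isolation_level=None)
conn.execute("CREATE TABLE IF NOT EXISTS counter (id INTEGER PRIMARY KEY, value INTEGER)")
conn.execute("INSERT OR IGNORE INTO counter (id, value) VALUES (1, 0)")

for _ in range(10000):
    try:
        conn.execute("BEGIN EXCLUSIVE")
        (value,) = conn.execute("SELECT value FROM counter WHERE id = 1").fetchone()
        conn.execute("UPDATE counter SET value = ? WHERE id = 1", (value + 1,))
        conn.execute("COMMIT")
    except sqlite3.OperationalError:
        conn.rollback()  # no-op if the transaction never started
conn.close()
```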
Even though SQLite granted an exclusive lock on each transaction, the total number of cycles actually executed was not 20,000 but less (the total number of iterations over the single counter, counted across both processes). The Python program threw almost no exceptions (only one, during a SELECT, over 20 runs). The SQLite revision at the moment of the test was 3.6.20, with Python 3.3 on CentOS 6.5. In my opinion, it is better to find a more reliable product for this kind of job, or to restrict writes to SQLite to a single unique process/thread.
Nobody seems to have mentioned WAL (Write-Ahead Log) mode. Make sure the transactions are properly organised, and with WAL mode set on, there is no need to keep the database locked while people are reading things while an update is going on.
The only issue is that at some point the WAL needs to be re-incorporated into the main database, which happens when the last connection to the database closes. With a very busy site you might find it takes a few seconds for all connections to close, but 100K hits per day should not be a problem.
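For reference, WAL mode is set with a single pragma, and the setting is persistent because it is stored in the database file itself. A minimal sketch in Python (the file name is assumed):

```python
import sqlite3

conn = sqlite3.connect("app.db")
# journal_mode is a property of the database file: once switched to WAL,
# every subsequent connection opens it in WAL mode.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # prints "wal" on success
```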
Yes, it does. Let's figure out why.
SQLite is transactional
Such ACID support, as well as concurrent reads/writes, is provided in two ways: using so-called journaling (let's call it the "old way") or write-ahead logging (let's call it the "new way").
Journaling (Old Way)
In this mode SQLite uses DATABASE-LEVEL locking. This is the crucial point to understand.
That means whenever it needs to read or write something, it first acquires a lock on the ENTIRE database file. Multiple readers can co-exist and read in parallel.
During writing, it makes sure an exclusive lock is acquired and that no other process is reading or writing simultaneously, hence writes are safe.
This is why SQLite is said to implement serializable transactions.
Troubles
As it needs to lock the entire database every time, and everybody waits for the process that is writing, concurrency suffers, and such concurrent writes/reads give fairly low performance.
Rollbacks/outages
Prior to writing something to the database file, SQLite first saves the chunk to be changed in a temporary file (the rollback journal). If something crashes in the middle of writing to the database file, it picks up this temporary file and reverts the changes from it.
Write-Ahead Logging or WAL (New Way)
In this case, all writes are appended to a temporary file (the write-ahead log), and this file is periodically merged back into the original database. When SQLite searches for something, it first checks this temporary file and, if nothing is found there, proceeds to the main database file.
As a result, readers don’t compete with writers and performance is much better compared to the Old Way.
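A small sketch of that behaviour (the file and table names are made up): a second connection can keep reading while a write transaction is open.

```python
import sqlite3

# Writer connection: switch the database to WAL and open a write transaction.
w = sqlite3.connect("wal_demo.db", isolation_level=None)
w.execute("PRAGMA journal_mode=WAL")
w.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")

r = sqlite3.connect("wal_demo.db")  # reader connection

w.execute("BEGIN IMMEDIATE")          # writer holds the write lock
w.execute("INSERT INTO t VALUES (1)")

# Under WAL the reader is not blocked: it reads a snapshot taken
# before the uncommitted insert.
print(r.execute("SELECT COUNT(*) FROM t").fetchone()[0])

w.execute("COMMIT")
```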
Caveats
SQLite heavily depends on the underlying filesystem's locking functionality, so it should be used with caution; more details here.
You're also likely to run into the "database is locked" error, especially in journaled mode, so your app needs to be designed with this error in mind.
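One common mitigation, sketched below (the table is made up), is to give SQLite a busy timeout so it retries internally before surfacing the error:

```python
import sqlite3

# Give SQLite up to 5 seconds to wait out a competing lock before it
# raises "database is locked". The "items" table is a made-up example.
conn = sqlite3.connect("app.db", timeout=5.0)
try:
    with conn:  # commits on success, rolls back on exception
        conn.execute("UPDATE items SET qty = qty - 1 WHERE id = ?", (42,))
except sqlite3.OperationalError as e:
    if "database is locked" not in str(e):
        raise
    # still locked after the timeout: back off and retry, or report it
```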
If most of those concurrent accesses are reads (e.g. SELECT), SQLite can handle them very well. But if you start writing concurrently, lock contention could become an issue. A lot would then depend on how fast your filesystem is, since the SQLite engine itself is extremely fast and has many clever optimizations to minimize contention, especially in SQLite 3.
For most desktop/laptop/tablet/phone applications, SQLite is fast enough as there's not enough concurrency. (Firefox uses SQLite extensively for bookmarks, history, etc.)
For server applications, somebody some time ago said that anything less than 100K page views a day could be handled perfectly by a SQLite database in typical scenarios (e.g. blogs, forums), and I have yet to see any evidence to the contrary. In fact, with modern disks and processors, 95% of web sites and web services would work just fine with SQLite.
If you want really fast read/write access, use an in-memory SQLite database. RAM is several orders of magnitude faster than disk.
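For example, an in-memory database is one connect call away (it disappears when the connection closes):

```python
import sqlite3

# ":memory:" creates a private database that lives entirely in RAM
# and vanishes when the connection is closed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO kv VALUES ('greeting', 'hello')")
print(conn.execute("SELECT v FROM kv WHERE k = 'greeting'").fetchone()[0])
```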
It's clear in the documentation.
In transaction processing, SQLite implements isolated transaction processing through exclusive and shared locks at the database level. This is why multiple processes can read data from the same database at the same time, but only one can write to it.
An exclusive lock must be obtained before a process or thread can perform a write operation on the database. After the exclusive lock is obtained, no other read or write operations can occur.
Implementation details, for the case of two writers:
SQLite has a locking scheme that lets different writers take their locks at the last possible moment, to ensure maximum concurrency.
The initial state is UNLOCKED; in this state, the connection has not accessed the database yet. When a connection is first made to a database, and even when a transaction has been started with BEGIN, the connection is still in the UNLOCKED state.
The next state after UNLOCKED is SHARED. In order to read (but not write) data from the database, a connection must first enter the SHARED state, that is to say, first obtain a SHARED lock. Multiple connections can obtain and hold SHARED locks at the same time, so multiple connections can read data from the same database simultaneously. But as long as even one SHARED lock remains unreleased, no connection is allowed to write to the database.
If a connection wants to write to the database, it must first get a RESERVED lock.
Once a connection obtains a RESERVED lock, it can start processing database modification operations, though these modifications can only be made in a buffer rather than actually written to disk; the modifications are saved in the memory buffer. When a connection wants to commit a modification (or transaction), it is necessary to upgrade the RESERVED lock to an EXCLUSIVE lock, and in order to get that lock, it must first upgrade the RESERVED lock to a PENDING lock.
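A sketch of the consequence for a second writer, using two connections in one process (the file name is an assumption); with the busy timeout disabled, the failure surfaces immediately:

```python
import sqlite3

# timeout=0 disables the busy handler, so lock conflicts fail at once.
c1 = sqlite3.connect("demo.db", timeout=0, isolation_level=None)
c2 = sqlite3.connect("demo.db", timeout=0, isolation_level=None)
c1.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")

c1.execute("BEGIN IMMEDIATE")        # c1 takes the RESERVED (write) lock
c1.execute("INSERT INTO t VALUES (1)")
try:
    c2.execute("BEGIN IMMEDIATE")    # c2 cannot become a second writer
except sqlite3.OperationalError as e:
    print(e)                         # "database is locked"
c1.execute("COMMIT")
```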
So SQLite safely handles concurrent access by multiple processes writing to the same database because it doesn't support simultaneous writes: the second writer gets SQLITE_BUSY or SQLITE_LOCKED when it hits the retry limit.

Yes, SQLite handles concurrency well, but it isn't the best from a performance angle. From what I can tell, there are no exceptions to that. The details are on SQLite's site: https://www.sqlite.org/lockingv3.html
This statement is of interest: "The pager module makes sure changes happen all at once, that either all changes occur or none of them do, that two or more processes do not try to access the database in incompatible ways at the same time"