I need to do a Postgres update on a collection of records, and I'm trying to prevent a deadlock that appeared in the stress tests.
The typical resolution is to update the records in a well-defined order, for example by ID, but it seems that Postgres doesn't allow ORDER BY in an UPDATE statement.
Assuming I need to do an update, for example:
UPDATE BALANCES SET BALANCE = 0 WHERE ID IN (SELECT ID FROM some_function() ORDER BY ID);
results in deadlocks when you run 200 queries concurrently. What to do?
I'm looking for a general solution, not case-specific workarounds like the ones in "UPDATE with ORDER BY".
It feels like there must be a better solution than writing a cursor function that updates the records one by one. Also, if there's no better way, what would that cursor function optimally look like?
In general, concurrency is difficult, especially with 200 concurrent statements (I'm assuming you don't only run SELECT queries) or even transactions (every single statement issued is wrapped in its own transaction if it's not in a transaction already).
The general solution concepts are (a combination of) these:
1. Be aware that deadlocks can happen, catch them in the application by checking the error codes (class 40, in particular 40P01, deadlock_detected), and retry the transaction.

2. Reserve locks with SELECT ... FOR UPDATE. Avoid explicit locks as long as possible: locks force other transactions to wait for the lock release, which harms concurrency, but they can prevent transactions from running into deadlocks. Check the deadlock examples in chapter 13 of the PostgreSQL docs, especially the one in which transaction A waits for B and B waits for A (the bank account thingy).

3. Choose a different isolation level, for example a weaker one like READ COMMITTED, if possible. Be aware of LOST UPDATEs in READ COMMITTED mode; prevent them with REPEATABLE READ.
4. Write your statements so that locks are taken in the same order in EVERY transaction, for example by table name alphabetically, with the general locking order A B C D (see the sketch after this list). This way, the transactions can interleave in any relative order and still have a good chance not to deadlock (depending on your statements, you may have other serialization issues, though). The statements of each transaction still run in the order that transaction specifies, but it can be that transaction 1 runs its first two statements, then transaction 2 runs its first one, then transaction 1 finishes, and finally transaction 2 finishes.
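A minimal sketch of point 4 (the tables a, b, c, d and the column val are placeholders): both transactions touch the tables strictly in alphabetical order, so neither can end up waiting for the other "crosswise":

-- transaction 1: uses a and c, in that order
BEGIN;
UPDATE a SET val = val + 1 WHERE id = 1;
UPDATE c SET val = val + 1 WHERE id = 1;
COMMIT;

-- transaction 2: uses a, b and d, also in alphabetical order
BEGIN;
UPDATE a SET val = val - 1 WHERE id = 1;
UPDATE b SET val = val - 1 WHERE id = 1;
UPDATE d SET val = val - 1 WHERE id = 1;
COMMIT;

Whichever transaction locks the row in a first makes the other one wait there, so the later lock requests can never form a cycle.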
Also, you should realise that a statement involving multiple rows is not executed atomically in a concurrent situation. In other words, if you have two statements A and B that each touch multiple rows, their per-row operations can be executed interleaved, for example in the order a1, b1, a2, b2, a3, but NOT as a block of a's followed by a block of b's. The same applies to a statement with a sub-query. Have you looked at the query plans using EXPLAIN?

In your case, you can try taking the row locks in ascending ID order before running the UPDATE, for example:
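A sketch of that (the SET value is only for illustration; some_function() is from your example):

BEGIN;
-- take the row locks in a stable, ascending order first
SELECT ID FROM BALANCES
WHERE ID IN (SELECT ID FROM some_function())
ORDER BY ID
FOR UPDATE;
-- then apply the actual change
UPDATE BALANCES SET BALANCE = 0
WHERE ID IN (SELECT ID FROM some_function());
COMMIT;

If some_function() is expensive or not stable, select its IDs once into a temporary table and use that in both statements.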
If it is possible with what you want to do, you can also use SELECT ... FOR UPDATE SKIP LOCKED, which skips rows that are already locked. This gets back the concurrency that is otherwise lost by waiting for another transaction to release its lock (plain FOR UPDATE), but it will not apply the UPDATE to the locked rows, which your application logic might require. So run it again later on (see point 1).
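For example (again with an illustrative SET value; SKIP LOCKED is available from PostgreSQL 9.5 on):

-- update only the rows nobody else has locked; rerun later for the rest
UPDATE BALANCES SET BALANCE = 0
WHERE ID IN (
    SELECT ID FROM BALANCES
    WHERE ID IN (SELECT ID FROM some_function())
    ORDER BY ID
    FOR UPDATE SKIP LOCKED
);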
Also read up on the LOST UPDATE and on SKIP LOCKED. A queue might be an idea in your case, which is explained perfectly in the SKIP LOCKED documentation, although relational DBMSs are not meant to be queues.

HTH
As far as I know, there's no way to accomplish this directly through the UPDATE statement; the only way to guarantee the lock order is to explicitly acquire the locks with a SELECT ... ORDER BY ID FOR UPDATE, e.g.:
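A sketch of what that can look like (the SET value is assumed for illustration):

UPDATE Balances
SET Balance = 0
WHERE ID IN (
    SELECT ID FROM Balances
    WHERE ID IN (SELECT ID FROM some_function())
    ORDER BY ID
    FOR UPDATE
);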
This has the downside of repeating the ID index lookup on the Balances table. In your simple example, you can avoid this overhead by fetching the physical row address (represented by the ctid system column) during the locking query, and using that to drive the UPDATE:
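Again a sketch with an assumed SET value; the ARRAY(...) form gives the planner a chance to use a Tid Scan:

UPDATE Balances
SET Balance = 0
WHERE ctid = ANY(ARRAY(
    SELECT ctid FROM Balances
    WHERE ID IN (SELECT ID FROM some_function())
    ORDER BY ID
    FOR UPDATE
));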
(Be careful when using ctids, as the values are transient. We're safe here, as the locks will block any changes.)

Unfortunately, the planner will only utilise the ctid in a narrow set of cases (you can tell if it's working by looking for a "Tid Scan" node in the EXPLAIN output). To handle more complicated queries within a single UPDATE statement, e.g. if your new balance was being returned by some_function() alongside the ID, you'll need to fall back to the ID-based lookup:
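A sketch, assuming some_function() returns columns ID and NewBalance:

UPDATE Balances
SET Balance = Locks.NewBalance
FROM (
    SELECT Balances.ID, f.NewBalance
    FROM Balances
    JOIN some_function() f ON f.ID = Balances.ID
    ORDER BY Balances.ID
    FOR UPDATE OF Balances
) Locks
WHERE Balances.ID = Locks.ID;

If the performance overhead is an issue, you'd need to resort to using a cursor, which would look something like this (same assumptions about some_function()):

DO $$
DECLARE
    -- the row locks are taken in ID order as the rows are fetched
    c CURSOR FOR
        SELECT Balances.ID, f.NewBalance
        FROM Balances
        JOIN some_function() f ON f.ID = Balances.ID
        ORDER BY Balances.ID
        FOR UPDATE OF Balances;
BEGIN
    FOR rec IN c LOOP
        -- update the row the cursor is currently positioned on
        UPDATE Balances
        SET Balance = rec.NewBalance
        WHERE CURRENT OF c;
    END LOOP;
END
$$;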