I have the following UPSERT in PostgreSQL 9.5:
INSERT INTO chats ("user", "contact", "name")
VALUES ($1, $2, $3),
($2, $1, NULL)
ON CONFLICT("user", "contact") DO NOTHING
RETURNING id;
If there are no conflicts it returns something like this:
 id
----
 50
 51
But if there are conflicts it doesn't return any rows:
 id
----
I want to return the new id values if there are no conflicts, or return the existing id values of the conflicting rows.
Can this be done? If so, how?
Upsert, being an extension of the INSERT query, can be defined with two different behaviors in case of a constraint conflict: DO NOTHING or DO UPDATE.

Note as well that RETURNING returns nothing, because no tuples have been inserted. Now with DO UPDATE, it is possible to perform operations on the tuple there is a conflict with. First note that it is important to define a constraint which will be used to detect the conflict.

I had exactly the same problem, and I solved it using DO UPDATE instead of DO NOTHING, even though I had nothing to update. In your case it would be something like this:
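A sketch of that approach, reusing the statement from the question. The SET assignment is arbitrary: it only exists so that conflicting rows take the UPDATE path and are therefore returned. Here it overwrites name with its existing value, so nothing effectively changes:

```sql
INSERT INTO chats ("user", "contact", "name")
VALUES ($1, $2, $3),
       ($2, $1, NULL)
ON CONFLICT ("user", "contact") DO UPDATE
SET "name" = chats."name"   -- no-op assignment, just to make RETURNING report the row
RETURNING id;
```

Because every conflicting row is now (emptily) updated, RETURNING id reports it along with the freshly inserted rows.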
This query will return all the rows, regardless of whether they have just been inserted or existed before.
The currently accepted answer seems ok for few conflicts, small tuples and no triggers. And it avoids concurrency issue 1 with brute force (see below). The simple solution has its appeal, the side effects may be less important.
For all other cases, though, do not update identical rows without need. Even if you see no difference on the surface, there are various side effects:
It might fire triggers that should not be fired.
It write-locks "innocent" rows, possibly incurring costs for concurrent transactions.
It might make the row seem new, though it's old (transaction timestamp).
Most importantly, with PostgreSQL's MVCC model a new row version is written either way, no matter whether the row data is the same. This incurs a performance penalty for the UPSERT itself, table bloat, index bloat, a performance penalty for all subsequent operations on the table, and VACUUM cost. A minor effect for few duplicates, but massive for mostly dupes.

You can achieve (almost) the same without empty updates and side effects.
Without concurrent write load
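A sketch of such a query, assuming the columns are named usr, contact, and name (see the aside on "user" at the bottom), with placeholder input values. Newly inserted rows come back from the ins CTE; pre-existing rows are fetched with an extra SELECT:

```sql
WITH input_rows (usr, contact, name) AS (
   VALUES
      (text 'foo1', text 'bar1', text 'bobby tables')  -- explicit type casts in the first row
    , ('foo2', 'bar2', NULL)
   )
, ins AS (
   INSERT INTO chats (usr, contact, name)
   SELECT * FROM input_rows
   ON CONFLICT (usr, contact) DO NOTHING
   RETURNING id, usr, contact              -- the unique columns are needed for the join below
   )
SELECT text 'i' AS source, id, usr, contact   -- 'i' = newly inserted
FROM   ins
UNION  ALL
SELECT text 's' AS source, c.id, usr, contact -- 's' = selected pre-existing row
FROM   input_rows
JOIN   chats c USING (usr, contact);          -- only finds rows existing before the query
```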
The source column is an optional addition to demonstrate how this works. You may actually need it to tell the difference between both cases (another advantage over empty writes).

The final JOIN chats works because newly inserted rows from an attached data-modifying CTE are not yet visible in the underlying table. (All parts of the same SQL statement see the same snapshots of underlying tables.)

Since the VALUES expression is free-standing (not directly attached to an INSERT), Postgres cannot derive data types from the target columns and you may have to add explicit type casts, as the manual explains.

The query itself may be a bit more expensive for few dupes, due to the overhead of the CTE and the additional SELECT (which should be cheap, since the perfect index is there by definition - a unique constraint is implemented with an index).

May be (much) faster for many duplicates. The effective cost of additional writes depends on many factors.
But there are fewer side effects and hidden costs in any case. It's most probably cheaper overall.
(Attached sequences are still advanced, since default values are filled in before testing for conflicts.)
About CTEs:
With concurrent write load
Assuming default READ COMMITTED transaction isolation. (Related answer on dba.SE with detailed explanation.)
The best strategy to defend against race conditions depends on exact requirements, the number and size of rows in the table and in the UPSERTs, the number of concurrent transactions, the likelihood of conflicts, available resources and other factors ...
Concurrency issue 1
If a concurrent transaction has written to a row which your transaction now tries to UPSERT, your transaction has to wait for the other one to finish.
If the other transaction ends with ROLLBACK (or any error, i.e. automatic ROLLBACK), your transaction can proceed normally. Minor side effect: gaps in the sequential numbers. But no missing rows.

If the other transaction ends normally (implicit or explicit COMMIT), your INSERT will detect a conflict (the UNIQUE index / constraint is absolute) and DO NOTHING, hence also not return the row. (Also cannot lock the row as demonstrated in concurrency issue 2 below, since it's not visible.) The SELECT sees the same snapshot from the start of the query and also cannot return the yet invisible row.

Any such rows are missing from the result set (even though they exist in the underlying table)!
This may be ok as is. Especially if you are not returning rows like in the example and are satisfied knowing the row is there. If that's not good enough, there are various ways around it.
You could check the row count of the output and repeat the statement if it does not match the row count of the input. May be good enough for the rare case. The point is to start a new query (can be in the same transaction), which will then see the newly committed rows.
Or check for missing result rows within the same query and overwrite those with the brute force trick demonstrated in Alextoni's answer.
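A sketch of such a combined statement, under the same assumptions about column names and placeholder input as before. The ups CTE re-inserts only the rows missing from the first round, this time with a brute-force DO UPDATE that overwrites name with its existing value:

```sql
WITH input_rows (usr, contact, name) AS (
   VALUES
      (text 'foo1', text 'bar1', text 'bobby tables')
    , ('foo2', 'bar2', NULL)
   )
, ins AS (
   INSERT INTO chats AS c (usr, contact, name)
   SELECT * FROM input_rows
   ON CONFLICT (usr, contact) DO NOTHING
   RETURNING id, usr, contact
   )
, sel AS (
   SELECT text 'i' AS source, id, usr, contact
   FROM   ins
   UNION  ALL
   SELECT text 's' AS source, c.id, usr, contact
   FROM   input_rows
   JOIN   chats c USING (usr, contact)
   )
, ups AS (                                      -- RARE corner case
   INSERT INTO chats AS c (usr, contact, name)  -- another UPSERT, not just UPDATE
   SELECT i.*
   FROM   input_rows i
   LEFT   JOIN sel s USING (usr, contact)       -- columns of the unique index
   WHERE  s.usr IS NULL                         -- missing from the first round!
   ON     CONFLICT (usr, contact) DO UPDATE
   SET    name = c.name                         -- overwrite with existing value: no-op
   RETURNING text 'u' AS source, id, usr, contact
   )
SELECT * FROM sel
UNION  ALL
SELECT * FROM ups;
```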
It's like the query above, but we add one more step with the CTE ups, before we return the complete result set. That last CTE will do nothing most of the time. Only if rows go missing from the returned result do we use brute force.

More overhead, yet. The more conflicts with pre-existing rows, the more likely this will outperform the simple approach.
One side effect: the 2nd UPSERT writes rows out of order, so it re-introduces the possibility of deadlocks (see below) if three or more transactions writing to the same rows overlap. If that's a problem, you need a different solution.
Concurrency issue 2
If concurrent transactions can write to involved columns of affected rows, and you have to make sure the rows you found are still there at a later stage in the same transaction, you can lock rows cheaply with:
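For example, a fragment of the INSERT step, sketched: the WHERE FALSE condition means the UPDATE itself never executes, but taking the DO UPDATE path still locks the conflicting row.

```sql
...
ON CONFLICT (usr, contact) DO UPDATE
SET   name = NULL
WHERE FALSE      -- never executed, but the conflicting row is still locked
...
```

Since no update actually happens, RETURNING still reports nothing for those rows; the point is only the row lock.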
And add a locking clause to the SELECT as well, like FOR UPDATE.

This makes competing write operations wait till the end of the transaction, when all locks are released. So be brief.
More details and explanation:
Deadlocks?
Defend against deadlocks by inserting rows in consistent order. See:
Data types and casts
Existing table as template for data types ...
Explicit type casts for the first row of data in the free-standing VALUES expression may be inconvenient. There are ways around it. You can use any existing relation (table, view, ...) as a row template. The target table is the obvious choice for the use case. Input data is coerced to appropriate types automatically, like in a VALUES clause of an INSERT.

This does not work for some data types (explanation in the linked answer at the bottom). The next trick works for all data types.
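A sketch of the simple row-template variant (the one that can fail for some data types), again assuming columns usr, contact, name: an empty SELECT from the target table supplies column names and types, so the VALUES rows need no casts.

```sql
WITH input_rows AS (
   (SELECT usr, contact, name FROM chats LIMIT 0)  -- empty template: names and types only
   UNION ALL
   VALUES ('foo1', 'bar1', 'bobby tables')         -- no explicit casts needed here
        , ('foo2', 'bar2', NULL)
   )
INSERT INTO chats (usr, contact, name)
SELECT * FROM input_rows
ON CONFLICT (usr, contact) DO NOTHING
RETURNING id;
```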
... and names
If you insert whole rows (all columns of the table - or at least a set of leading columns), you can omit column names, too. Assuming table chats in the example only has the 3 columns used.

Detailed explanation and more alternatives:
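A sketch under that stated assumption, i.e. chats consists of exactly these three columns in this order:

```sql
WITH input_rows (usr, contact, name) AS (
   VALUES
      (text 'foo1', text 'bar1', text 'bobby tables')
    , ('foo2', 'bar2', NULL)
   )
INSERT INTO chats            -- no target column list: whole rows in table column order
SELECT * FROM input_rows
ON CONFLICT (usr, contact) DO NOTHING
RETURNING *;
```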
Aside: don't use reserved words like "user" as identifier. That's a loaded footgun. Use legal, lower-case, unquoted identifiers. I replaced it with usr.