We have an application which deletes a row from a table based on user requests. I cannot change the application code. However, I want to insert a row into another table (kind of like a journal log) with information from a few other tables, based on information from the row that is being deleted.
How do I achieve this within PostgreSQL?
Write a trigger function. Something like this:
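A minimal sketch, assuming the application deletes from a table `tbl`, the journal table is `delete_log`, and the extra information comes from `other_tbl` (all names are placeholders - adapt them to your schema):

```sql
-- Sketch with assumed names (tbl, other_tbl, delete_log); adapt to your schema.
CREATE OR REPLACE FUNCTION trg_log_del()
  RETURNS trigger
  LANGUAGE plpgsql AS
$func$
BEGIN
   -- OLD holds the row that was just deleted from tbl
   INSERT INTO delete_log (tbl_id, other_info, deleted_at)
   SELECT OLD.tbl_id, o.other_info, now()
   FROM   other_tbl o
   WHERE  o.tbl_id = OLD.tbl_id;   -- must match exactly one row

   RETURN NULL;                    -- return value is ignored for an AFTER trigger
END
$func$;
```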
And a trigger `ON DELETE`. Like this:
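Again just a sketch with the same placeholder names:

```sql
CREATE TRIGGER log_del
AFTER DELETE ON tbl
FOR EACH ROW EXECUTE PROCEDURE trg_log_del();   -- EXECUTE FUNCTION in Postgres 11+
```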
**Key elements**

- Best make it a trigger `AFTER DELETE` and `FOR EACH ROW`.
- To return all columns from the old row, use the syntax `(OLD).*`. See the manual about accessing composite types. Alternatively, `OLD.*` is valid syntax, too, because `OLD` is added to the `FROM` clause implicitly. For a `VALUES` expression it would have to be `(OLD).*`, though (see the sketch after this list).
- You can include values from any other table like I demonstrate. Just make sure to get a single row, or you create multiple entries.
- As the trigger fires `AFTER` the event, the function can `RETURN NULL`.
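For illustration, inside the trigger function body a `VALUES`-based insert would look like this, assuming a journal table `tbl_archive` with exactly the same columns as `tbl` (names are placeholders):

```sql
INSERT INTO tbl_archive
VALUES ((OLD).*);     -- parenthesized composite expansion required in a VALUES list
-- VALUES (OLD.*);    -- not accepted here
```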
**About visibility**
In response to @couling's watchful comment:

While foreign keys can be declared `DEFERRED`, this only defers the integrity check, not the deletion itself. Rows that are deleted in triggers executed before the one at hand, or by `ON DELETE CASCADE` foreign keys, will not be visible any more at the time this `AFTER DELETE` trigger is called. (It all happens in one transaction, obviously. None of these details matter for other transactions, which will see all or none of the effects. Refer to the manual for more about the MVCC model and transaction isolation.)

Therefore, if you want to include values from rows that depend on the deleted row in such a way in your `INSERT`, be sure to call this trigger before those rows get deleted. You may have to make this trigger `BEFORE DELETE`. Or it can mean that you have to order your triggers accordingly: `BEFORE` triggers come before `AFTER` triggers, obviously, and triggers at the same level are executed in alphabetical order. However, while I am being precise here, I should also add that changes made to the row (or to depending rows) in other `BEFORE` triggers are likewise only visible if those triggers are called before this one.

My advice to make it an `AFTER` trigger was because it is less prone to complications and cheaper if another trigger might cancel (roll back) the `DELETE` halfway through the operation - as long as none of the above applies.
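For example (hypothetical trigger and function names): since triggers at the same level fire in alphabetical order, naming the logging trigger so that it sorts first makes it run before a trigger that removes depending rows:

```sql
-- "a_log_del" sorts before "b_clean_up", so the logging trigger fires first.
CREATE TRIGGER a_log_del
BEFORE DELETE ON tbl
FOR EACH ROW EXECUTE PROCEDURE trg_log_del();

CREATE TRIGGER b_clean_up
BEFORE DELETE ON tbl
FOR EACH ROW EXECUTE PROCEDURE trg_clean_up();   -- hypothetical function
```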
You may want to use functions that I wrote for keeping historical data. Short description:

Historical data are kept in a separate schema named `audit`. So the first step is creating this schema:
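```sql
-- create the schema that will hold the historical copies of the tables
CREATE SCHEMA audit;
```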
In the `audit` schema one can find exact copies of the tables from `public`, which are created dynamically when the first change to the data in the `public` schema occurs. So, before the database is first used, the `audit` schema remains empty until the first change is made to one of the tables.
The function `_audit_table_creator(name)` copies the structure of the table from the `public` schema and creates the same table in the `audit` schema with some additional columns that I call the 'audit stamp'. The audit stamp keeps information about:
I think the biggest advantage of this solution is that composite primary keys are supported (the function `_where_clause_creator(text[])` creates the proper WHERE clause for the table that called the trigger, by concatenating the strings in the right order).
Viewing historical records:
Every time we want to retrieve archival data, we have to qualify the table with the schema name, i.e. to retrieve historical data about the user whose `user_id = 5`, one has to write:
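Presumably something like this (the table name `users` is an assumption):

```sql
SELECT *
FROM   audit.users
WHERE  user_id = 5;
```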
So the same queries can be used in both schemas, but to retrieve historical data one has to add `audit.` before the table name.
You may want to create delete triggers automatically for all tables in the database at once; if so, you can create them with a query like the one below:
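A sketch of one way to do it, assuming the delete-audit trigger function is the hypothetically named `audit_delete()` from "The main function" below:

```sql
-- Sketch: create an AFTER DELETE trigger on every base table in public.
-- The trigger function name audit_delete() is an assumption.
DO
$$
DECLARE
   t text;
BEGIN
   FOR t IN
      SELECT table_name
      FROM   information_schema.tables
      WHERE  table_schema = 'public'
      AND    table_type = 'BASE TABLE'
   LOOP
      EXECUTE format(
         'CREATE TRIGGER %I
          AFTER DELETE ON public.%I
          FOR EACH ROW EXECUTE PROCEDURE audit_delete()',
         t || '_audit_delete', t);
   END LOOP;
END
$$;
```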
The main function:
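In outline, such a function can look like this (a sketch; the name `audit_delete` and the exact audit-stamp columns - timestamp, user, operation - are assumptions):

```sql
-- Sketch only: copy the deleted row into audit.<table>, creating that table on first use.
CREATE OR REPLACE FUNCTION audit_delete()
  RETURNS trigger
  LANGUAGE plpgsql AS
$func$
BEGIN
   -- create the audit copy of the table on first use
   IF to_regclass(format('audit.%I', TG_TABLE_NAME)) IS NULL THEN
      PERFORM _audit_table_creator(TG_TABLE_NAME);
   END IF;

   -- copy the deleted row plus an assumed audit stamp (timestamp, user, operation)
   EXECUTE format(
      'INSERT INTO audit.%I SELECT ($1).*, now(), current_user, %L',
      TG_TABLE_NAME, TG_OP)
   USING OLD;

   RETURN OLD;
END
$func$;
```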
The trigger:
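For a single table it is attached like this (table name `users` and trigger function name assumed):

```sql
CREATE TRIGGER users_audit_delete
AFTER DELETE ON public.users
FOR EACH ROW EXECUTE PROCEDURE audit_delete();
```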
Other functions used: `_audit_table_creator(name)` and `_where_clause_creator(text[])`, described above.