I recently started working on a large complex application, and I've just been assigned a bug due to this error:
ORA-04091: table SCMA.TBL1 is mutating, trigger/function may not see it
ORA-06512: at "SCMA.TRG_T1_TBL1_COL1", line 4
ORA-04088: error during execution of trigger 'SCMA.TRG_T1_TBL1_COL1'
The trigger in question looks like this:
create or replace TRIGGER TRG_T1_TBL1_COL1
BEFORE INSERT OR UPDATE OF t1_appnt_evnt_id ON TBL1
FOR EACH ROW
WHEN (NEW.t1_prnt_t1_pk is not null)
DECLARE
  v_reassign_count number(20);
BEGIN
  select count(t1_pk) INTO v_reassign_count from TBL1
  where t1_appnt_evnt_id = :new.t1_appnt_evnt_id and t1_prnt_t1_pk is not null;
  IF (v_reassign_count > 0) THEN
    RAISE_APPLICATION_ERROR(-20013, 'Multiple reassignments not allowed');
  END IF;
END;
The table has a primary key "t1_pk", an "appointment event id" t1_appnt_evnt_id, and another column "t1_prnt_t1_pk" which may or may not contain another row's t1_pk.
It appears the trigger is trying to make sure that nobody else with the same t1_appnt_evnt_id has a referral to another row, if this row is referring to another row.
The comment on the bug report from the DBA says "remove the trigger, and perform the check in the code", but unfortunately they have a proprietary code generation framework layered on top of Hibernate, so I can't even figure out where the check would actually get written out. So I'm hoping there is a way to make this trigger work. Is there?
I had a similar error with Hibernate, and flushing the session by using session.flush() solved the problem for me. (I'm not posting my code block, as I was sure everything was written properly and should work, but it did not until I added the flush() call beforehand.) Maybe this can help someone.
I agree with Dave that the desired result probably can and should be achieved using built-in constraints such as unique indexes (or unique constraints).
If you really need to get around the mutating table error, the usual way is to create a package containing a package-scoped variable that is a table of something that can identify the changed rows (I think ROWID is possible; otherwise you have to use the primary key. I don't use Oracle currently, so I can't test it). The FOR EACH ROW trigger fills this variable with every row modified by the statement, and then an AFTER statement trigger reads the recorded rows and validates them.
Something like the following (the syntax is probably wrong; I haven't worked with Oracle for a few years):
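A sketch of that approach, since the original code isn't shown: the package holds the appointment event ids touched by the statement, a statement-level trigger resets it, the row-level trigger records values, and an AFTER statement trigger does the actual check. All names here are illustrative, not from the original post.

create or replace package tbl1_trg_pkg as
  -- ids recorded by the row trigger for the current statement
  type id_tab is table of tbl1.t1_appnt_evnt_id%type index by pls_integer;
  g_ids id_tab;
end tbl1_trg_pkg;
/

create or replace trigger trg_tbl1_stmt_before
before insert or update of t1_appnt_evnt_id on tbl1
begin
  tbl1_trg_pkg.g_ids.delete;  -- start each statement with an empty list
end;
/

create or replace trigger trg_tbl1_row
after insert or update of t1_appnt_evnt_id on tbl1
for each row
when (new.t1_prnt_t1_pk is not null)
begin
  -- only record the value; querying TBL1 here would raise ORA-04091
  tbl1_trg_pkg.g_ids(tbl1_trg_pkg.g_ids.count + 1) := :new.t1_appnt_evnt_id;
end;
/

create or replace trigger trg_tbl1_stmt_after
after insert or update of t1_appnt_evnt_id on tbl1
declare
  v_count number;
begin
  -- the statement has finished, so TBL1 is no longer mutating
  for i in 1 .. tbl1_trg_pkg.g_ids.count loop
    select count(*) into v_count
      from tbl1
     where t1_appnt_evnt_id = tbl1_trg_pkg.g_ids(i)
       and t1_prnt_t1_pk is not null;
    if v_count > 1 then
      raise_application_error(-20013, 'Multiple reassignments not allowed');
    end if;
  end loop;
end;
/

On Oracle 11g and later, a single compound trigger can bundle all of this and replace the package variable.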
With any trigger-based (or application code-based) solution you need to put in locking to prevent data corruption in a multi-user environment. Even if your trigger worked, or was re-written to avoid the mutating table issue, it would not prevent 2 users from simultaneously updating t1_appnt_evnt_id to the same value on rows where t1_prnt_t1_pk is not null. Assume there are currently no rows where t1_appnt_evnt_id=123 and t1_prnt_t1_pk is not null:
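The original interleaving isn't shown; here is a sketch of the race with illustrative values, assuming both updated rows have a non-null t1_prnt_t1_pk:

-- Session 1:
update tbl1 set t1_appnt_evnt_id = 123 where t1_pk = 1;
-- the trigger counts rows with t1_appnt_evnt_id=123 and a non-null
-- t1_prnt_t1_pk, sees 0, and allows the update

-- Session 2, before session 1 commits:
update tbl1 set t1_appnt_evnt_id = 123 where t1_pk = 2;
-- session 2's query cannot see session 1's uncommitted change,
-- so it also sees 0 and allows the update

-- Session 1:
commit;
-- Session 2:
commit;
-- two rows with t1_appnt_evnt_id=123 now have t1_prnt_t1_pk set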
You now have a corrupted database!
The way to avoid this (in trigger or application code) would be to lock the parent row in the table referenced by t1_appnt_evnt_id=123 before performing the check:
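A sketch of such a check, written as a procedure that could be called from the AFTER statement trigger above. APPT_EVNT is a hypothetical parent table name, since the question doesn't show where appointment events actually live:

create or replace procedure check_reassignment( p_evnt_id in number ) is
  v_lock_id number;
  v_count   number;
begin
  -- lock the parent row; concurrent checkers for the same event
  -- block here until the first one commits or rolls back
  select appt_evnt_id into v_lock_id
    from appt_evnt
   where appt_evnt_id = p_evnt_id
     for update;

  select count(*) into v_count
    from tbl1
   where t1_appnt_evnt_id = p_evnt_id
     and t1_prnt_t1_pk is not null;

  if v_count > 1 then
    raise_application_error(-20013, 'Multiple reassignments not allowed');
  end if;
end;
/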
Now session 2's trigger must wait for session 1 to commit or rollback before it performs the check.
It would be much simpler and safer to implement Dave Costa's index!
Finally, I'm glad no one has suggested adding PRAGMA AUTONOMOUS_TRANSACTION to your trigger: this is often suggested on forums and works insofar as the mutating table error goes away - but it makes the data integrity problem even worse, because the autonomous transaction cannot see the current transaction's uncommitted changes at all! So just don't...
I think I disagree with your description of what the trigger is trying to do. It looks to me like it is meant to enforce this business rule: for a given value of t1_appnt_evnt_id, only one row can have a non-NULL value of t1_prnt_t1_pk at a time. (It doesn't matter whether the non-NULL values in that second column are the same or not.)
Interestingly, it is defined for UPDATE OF t1_appnt_evnt_id but not for the other column, so I think someone could break the rule by updating t1_prnt_t1_pk, unless there is a separate trigger for that column.
There might be a way you could create a function-based index that enforces this rule so you can get rid of the trigger entirely. I came up with one way, but it requires some assumptions: the primary key is numeric, and both t1_pk and t1_prnt_t1_pk are always positive.
If these assumptions are true, you could create a function like this:
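The original function isn't reproduced here; based on the description below, it would be something along these lines (the name f is illustrative):

create or replace function f( pk in number, prnt in number )
return number
deterministic
is
begin
  if prnt is null then
    return -pk;   -- unique per row, so rows with a NULL PRNT never collide
  else
    return prnt;  -- the actual (positive) value of t1_prnt_t1_pk
  end if;
end;
/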
and an index like this:
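Again a sketch, with an illustrative index name:

create unique index tbl1_one_prnt_idx
  on tbl1 ( t1_appnt_evnt_id, f( t1_pk, t1_prnt_t1_pk ) );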
So rows where the PRNT column (t1_prnt_t1_pk) is NULL would appear in the index with the inverse of the primary key as the second value, so they would never conflict with each other. Rows where it is not NULL would use the actual (positive) value of the column. The only way you could get a constraint violation would be if two rows had the same non-NULL values in both columns.
This is perhaps overly "clever", but it might help you get around your problem.
Update from Paul Tomblin: I went with the update to the original idea that igor put in the comments:
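igor's comment isn't reproduced here, but the idea it points to is a unique function-based index that keeps rows with a NULL t1_prnt_t1_pk out of the index entirely, so no helper function is needed (the index name is illustrative):

create unique index tbl1_one_reassign_idx
  on tbl1 ( case when t1_prnt_t1_pk is not null then t1_appnt_evnt_id end );

Rows with a NULL t1_prnt_t1_pk map to NULL and are not indexed at all; every other row maps to its t1_appnt_evnt_id, so at most one row per appointment event can have a non-NULL t1_prnt_t1_pk, and the trigger can be dropped.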