We have a trigger that creates audit records for a table; it joins the inserted and deleted tables to see if any columns have changed. The join has been working well for small sets, but now I'm updating about 1 million rows and it doesn't finish in days. I tried updating batches of rows at different orders of magnitude, and it's obvious the runtime grows exponentially, which would make sense if the inserted/deleted tables are being scanned to do the join.
I tried creating an index on inserted but got the error:
Cannot find the object "inserted" because it does not exist or you do not have permissions.
Is there any way to make this any faster?
Inserting into temporary tables indexed on the joining columns could well improve things, as inserted and deleted are not indexed.
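As a minimal sketch of that approach, assuming the audited table is dbo.YourTable with a key column Id and a data column Bar, and a hypothetical dbo.AuditLog table for the output:
CREATE TRIGGER trg_YourTable_Audit ON dbo.YourTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON

    /*Copy the virtual tables into temp tables we are allowed to index*/
    SELECT Id, Bar INTO #i FROM inserted
    SELECT Id, Bar INTO #d FROM deleted
    CREATE UNIQUE CLUSTERED INDEX IX_i ON #i (Id)
    CREATE UNIQUE CLUSTERED INDEX IX_d ON #d (Id)

    /*The change-detection join now has an indexed access path*/
    INSERT INTO dbo.AuditLog (Id, OldBar, NewBar)
    SELECT i.Id, d.Bar, i.Bar
    FROM #i i
    JOIN #d d ON d.Id = i.Id
    WHERE EXISTS (SELECT d.Bar EXCEPT SELECT i.Bar) /*NULL-safe "changed" test*/
END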
You can check @@ROWCOUNT inside the trigger so you only perform this logic above some threshold number of rows, though on SQL Server 2008 this might overstate the number somewhat if the trigger was fired as the result of a MERGE statement (it will return the total number of rows affected by all MERGE actions, not just the one relevant to that specific trigger). In that case you can do something like SELECT @NumRows = COUNT(*) FROM (SELECT TOP 10 * FROM INSERTED) T to see if the threshold is met.
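Sketched out at the top of the trigger, with an illustrative threshold of 10 rows:
DECLARE @NumRows int

/*TOP stops the scan of inserted once the threshold is reached,
  so this stays cheap even for very large updates*/
SELECT @NumRows = COUNT(*) FROM (SELECT TOP 10 * FROM inserted) T

IF @NumRows >= 10
    RETURN /*large update: skip the expensive per-row audit below*/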
Addition
One other possibility you could experiment with is simply bypassing the trigger for these large updates. You could use SET CONTEXT_INFO to set a flag and check the value of this inside the trigger. You could then use OUTPUT inserted.*, deleted.* to get the "before" and "after" values for a row without needing to JOIN at all.
/*@T must match the OUTPUT column list; the Id/Bar schema below is
  an assumption about YourTable*/
DECLARE @T TABLE (Id int, Bar varchar(10), OldId int, OldBar varchar(10))

DECLARE @TriggerFlag varbinary(128)
SET @TriggerFlag = CAST('Disabled' AS varbinary(128))
SET CONTEXT_INFO @TriggerFlag

UPDATE YourTable
SET Bar = 'X'
OUTPUT inserted.*, deleted.* INTO @T

/*Reset the flag*/
SET CONTEXT_INFO 0x
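For completeness, a minimal sketch of the corresponding check at the top of the trigger. CONTEXT_INFO() returns NULL when the flag was never set, so ordinary statements still get audited:
IF CONTEXT_INFO() = CAST('Disabled' AS varbinary(128))
    RETURN /*flag set by the batch above: skip the audit logic*/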