The following statement:
INSERT INTO dbo.Changes([Content], [Date], [UserId], [CompanyId])
VALUES (@1, @2, @3, @4);
SELECT @@identity;
gives me this SQL error 3960:
Snapshot isolation transaction aborted due to update conflict. You
cannot use snapshot isolation to access table 'dbo.Companies' directly
or indirectly in database 'myDatabase' to update, delete, or insert
the row that has been modified or deleted by another transaction.
Retry the transaction or change the isolation level for the
update/delete statement.
As far as I understand from the error message, I should not update, delete, or insert into table dbo.Companies while another connection is modifying dbo.Companies. But why does it occur when I was inserting a new row into another table, dbo.Changes (which has a foreign key to dbo.Companies), and I was not deleting the referenced row in dbo.Companies, but only updating a row in dbo.Companies without touching its primary key? This should work fine, shouldn't it? (Or is it a bug in SQL Server?)
UPDATE:
The tables look like the following:
dbo.Changes([Id] int PK, [Content] nvarchar,
[Date] datetime, [UserId] int, [CompanyId] int -> dbo.Companies.[Id])
dbo.Companies([Id] int PK, [Name] nvarchar)
The update running in the other transaction is:
UPDATE dbo.Companies SET [Name] = @2 WHERE [Id] = @1;
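For reference, a minimal sketch of the sequence that produces the error, assuming ALLOW_SNAPSHOT_ISOLATION is already ON for the database and using placeholder literal values:
-- Connection A: start a snapshot transaction; the first read fixes the snapshot
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT [Name] FROM dbo.Companies WHERE [Id] = 1;

-- Connection B (autocommit): modify the referenced company row
UPDATE dbo.Companies SET [Name] = N'New name' WHERE [Id] = 1;

-- Connection A: the insert has to verify the foreign key against a Companies row
-- that changed after the snapshot was taken, and fails with error 3960
INSERT INTO dbo.Changes([Content], [Date], [UserId], [CompanyId])
VALUES (N'some content', GETDATE(), 1, 1);
SELECT @@identity;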
It appears that SQL Server acquires update locks on any record it has to read, even if it doesn't modify it.
More info in this microsoft.public.sqlserver.server thread:
Without a supporting index on CustomerContactPerson, the statement
DELETE FROM ContactPerson WHERE ID = @ID;
will require a "current"
read of all the rows in CustomerContactPerson to ensure that there
are no CustomerContactPerson rows that refer to the deleted
ContactPerson row. With the index, the DELETE can determine that
there are no related rows in CustomerContactPerson without reading the
rows affected by the other transaction.
Additionally, in a snapshot
transaction the pattern for reading data which you are going to turn
around and update is to take an UPDLOCK when you read. This ensures
that you are making your update on the basis of "current" data, not
"consistent" (snapshot) data, and that when you issue the DML, it the
data won't be locked, and you won't unwittingly overwrite another
session's change.
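A sketch of that read-with-UPDLOCK pattern applied to the tables from the question (the variable names, values, and transaction framing are assumptions, not the original code):
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;

DECLARE @CompanyId int = 1;                   -- placeholder values
DECLARE @NewName nvarchar(100) = N'New name';

-- Read the row you intend to modify with UPDLOCK, so the read sees (and locks)
-- the current committed version rather than the snapshot version
SELECT [Name]
FROM dbo.Companies WITH (UPDLOCK)
WHERE [Id] = @CompanyId;

-- The update is then based on current data and does not silently overwrite
-- a concurrent session's change
UPDATE dbo.Companies SET [Name] = @NewName WHERE [Id] = @CompanyId;

COMMIT TRANSACTION;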
The fix for us was adding indexes on the foreign keys.
In your example, I suspect adding an index to Changes.CompanyId will help. I'm not sure if this is a real solution. Can the SQL Server optimizer choose not to use the index?
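For completeness, the suggested index would be created with something like the following (the index name is arbitrary):
CREATE NONCLUSTERED INDEX IX_Changes_CompanyId ON dbo.Changes ([CompanyId]);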
SQL Server can see that an update to the referenced table COULD modify the behavior of the insert ... that seems fair to me, since SQL Server cannot guess what other logic might depend on the [Name] column (triggers, etc.).
If your applications implement deadlock retry logic, you can modify them to treat error 3960 the same as error 1205 (deadlock victim) and automatically retry ...
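A minimal sketch of that retry logic in T-SQL (assumes SQL Server 2012+ for THROW; the declared variables stand in for the question's @1, @2, @3, @4 parameters, and the retry count is arbitrary):
-- Placeholder stand-ins for the question's parameters
DECLARE @Content nvarchar(max) = N'some content',
        @Date datetime = GETDATE(),
        @UserId int = 1,
        @CompanyId int = 1;
DECLARE @retry int = 3;

WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO dbo.Changes([Content], [Date], [UserId], [CompanyId])
        VALUES (@Content, @Date, @UserId, @CompanyId);
        SELECT @@identity;

        COMMIT TRANSACTION;
        SET @retry = 0;                        -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;

        -- 1205 = deadlock victim, 3960 = snapshot update conflict
        IF ERROR_NUMBER() IN (1205, 3960) AND @retry > 1
            SET @retry = @retry - 1;           -- transient conflict: try again
        ELSE
            THROW;                             -- out of retries or a different error
    END CATCH
END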