Deadlocks - Will this really help?

Posted 2019-06-19 01:13

Question:

So I've got a query that keeps deadlocking on me. People who know the system well can't figure out why the sproc is deadlocking, but they tell me that I should just add this to it:

SET NOCOUNT ON
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

Is this really a valid solution? What does that do?

Answer 1:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

This will cause the system to return inconsistent data, including duplicate records and missing records. Read more in Previously committed rows might be missed if NOLOCK hint is used, or in Timebomb - The Consistency problem with NOLOCK / READ UNCOMMITTED.

Deadlocks can be investigated and fixed; it is not a big deal if you follow the proper procedure. Of course, throwing in a dirty read may seem easier, but down the road you'll be sitting long hours staring at your general ledger and wondering why the heck it does not balance debits and credits. So read it again until you really grok this: DIRTY READS ARE INCONSISTENT READS.
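
To make that concrete, here is a minimal sketch of a dirty read; the dbo.Accounts table and its columns are hypothetical, and the two batches run in separate sessions:

-- Session 1: update inside a transaction that is later rolled back
BEGIN TRANSACTION
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1
-- ...transaction still open...
ROLLBACK TRANSACTION

-- Session 2: runs while session 1's transaction is still open
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1
-- returns the uncommitted balance; once session 1 rolls back,
-- that value never officially existed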

If you want a get-out-of-jail card, turn on snapshot isolation:

ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON

But keep in mind that snapshot isolation does not fix the deadlocks; it only hides them. Properly investigating and fixing the cause of the deadlock is always the appropriate action.
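
If you do want to chase it down, a common first step is to have SQL Server write full deadlock details to its error log via a global trace flag. This is a minimal sketch; it assumes you have the rights to set global trace flags on the instance:

-- Write a detailed report (victim, resources, statements involved)
-- to the SQL Server error log every time a deadlock occurs
DBCC TRACEON (1222, -1)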



Answer 2:

NOCOUNT will keep your query from returning row counts back to the calling application (e.g. "1000000 rows affected").

TRANSACTION ISOLATION LEVEL READ UNCOMMITTED will allow for dirty reads as indicated here.

The isolation level may help, but do you want to allow dirty reads?
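
For context, this is roughly where those options would sit in the procedure; the procedure and table names below are made up for illustration. Note that an isolation level set inside a stored procedure reverts to the caller's level when the procedure returns:

CREATE PROCEDURE dbo.usp_GetOrders  -- hypothetical procedure
AS
BEGIN
    SET NOCOUNT ON                                     -- suppress "N rows affected" messages
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED   -- dirty reads from here on

    SELECT OrderId, CustomerId, OrderDate
    FROM dbo.Orders                                    -- read without taking shared locks
END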



Answer 3:

Randomly adding SET options to the query is unlikely to help, I'm afraid.

SET NOCOUNT ON

Will have no effect on the issue.

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

will prevent your query from taking out shared locks. As well as reading "dirty" data, it can also lead to your query reading the same rows twice, or not at all, depending on what other concurrent activity is happening.

Whether this will resolve your deadlock issue depends upon the type of deadlock. It will have no effect at all if the issue is two writers deadlocking due to non-linear ordering of lock requests (transaction 1 updates row A and transaction 2 updates row B, then transaction 1 requests a lock on B while transaction 2 requests a lock on A), as sketched below.
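
A minimal sketch of that writer-writer pattern, using a hypothetical dbo.t table; run each batch in its own session and pause between the two updates:

-- Session 1
BEGIN TRANSACTION
UPDATE dbo.t SET val = 1 WHERE id = 'A'   -- exclusive lock on row A
UPDATE dbo.t SET val = 1 WHERE id = 'B'   -- blocks, waiting for session 2's lock on B

-- Session 2
BEGIN TRANSACTION
UPDATE dbo.t SET val = 2 WHERE id = 'B'   -- exclusive lock on row B
UPDATE dbo.t SET val = 2 WHERE id = 'A'   -- blocks on A; SQL Server detects the cycle
                                          -- and kills one session with error 1205

READ UNCOMMITTED changes nothing here, because both sessions need exclusive locks to write; the usual fix is to touch the rows in the same order in every code path.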

Can you post the offending query and deadlock graph? (if you are on SQL 2005 or later)
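
If capturing the graph is the sticking point: on SQL Server 2012 and later the built-in system_health Extended Events session records deadlocks automatically, and a query along these lines (a sketch, assuming the default event file location) pulls them back out:

SELECT CAST(event_data AS XML) AS deadlock_report
  FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
 WHERE object_name = 'xml_deadlock_report'

On SQL 2005/2008 you would use SQL Profiler's Deadlock graph event or the trace flags instead.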



Answer 4:

The best guide is:

http://technet.microsoft.com/es-es/library/ms173763.aspx

Snippet:

Specifies that statements can read rows that have been modified by other transactions but not yet committed.

Transactions running at the READ UNCOMMITTED level do not issue shared locks to prevent other transactions from modifying data read by the current transaction. READ UNCOMMITTED transactions are also not blocked by exclusive locks that would prevent the current transaction from reading rows that have been modified but not committed by other transactions. When this option is set, it is possible to read uncommitted modifications, which are called dirty reads. Values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction. This option has the same effect as setting NOLOCK on all tables in all SELECT statements in a transaction. This is the least restrictive of the isolation levels.

In SQL Server, you can also minimize locking contention while protecting transactions from dirty reads of uncommitted data modifications using either of the following (see the sketch below):

- The READ COMMITTED isolation level with the READ_COMMITTED_SNAPSHOT database option set to ON.
- The SNAPSHOT isolation level.
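
A minimal sketch of enabling each option; MyDatabase is a placeholder, and switching READ_COMMITTED_SNAPSHOT on requires that no other connections are active in the database at that moment:

-- Option 1: make the default READ COMMITTED level use row versioning
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON

-- Option 2: allow sessions to opt in to snapshot isolation explicitly
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON

-- ...and then, in the session that wants it:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT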



Answer 5:

On a different tack, there are two other aspects to consider that may help.

1) Indexes, and specifically the indexes used by the SQL. The indexing strategy used on the tables will affect how many rows are touched, and therefore locked, by the modification. If you make the data modifications using a unique index, you may reduce the chance of deadlocks.

One approach (it will not work in all cases, of course) where the use of NOLOCK is targeted rather than global:

The "old" way:
UPDATE dbo.change_table
   SET somecol = newval
 WHERE non_unique_value = 'something'

The "new" way:
SELECT uid
  INTO #temp_table                        -- capture the keys first, with a targeted NOLOCK
  FROM dbo.change_table WITH (NOLOCK)
 WHERE non_unique_value = 'something'

UPDATE c
   SET somecol = newval
  FROM dbo.change_table c
       INNER JOIN
       #temp_table t
       ON (c.uid = t.uid)                 -- update via the unique key

2) Transaction duration. The longer a transaction is open, the more likely there is to be contention. If there is a way to reduce the amount of time that records remain locked, you can reduce the chances of a deadlock occurring. For example, perform as many SELECT statements (e.g. lookups) as possible at the start of the code, instead of interleaving an INSERT or UPDATE, then a lookup, then an INSERT, and then another lookup. This is also where one can use the NOLOCK hint for SELECTs on "static" tables that are not changing, reducing the lock "footprint" of the code; a sketch follows.
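
A minimal sketch of that restructuring, with made-up table and column names; the lookups run before the transaction opens, so write locks are only held for the short final transaction:

-- Do the read-only lookups up front, before any transaction is open
DECLARE @customer_id int, @price money

SELECT @customer_id = customer_id
  FROM dbo.customers WITH (NOLOCK)        -- "static" lookup table
 WHERE customer_code = 'ACME'

SELECT @price = unit_price
  FROM dbo.price_list WITH (NOLOCK)
 WHERE product_code = 'WIDGET'

-- Keep the transaction, and therefore the locks, as short as possible
BEGIN TRANSACTION

INSERT INTO dbo.orders (customer_id, product_code, unit_price)
VALUES (@customer_id, 'WIDGET', @price)

COMMIT TRANSACTION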