Is T-SQL Stored Procedure Execution Atomic?

Posted 2019-02-16 08:37

Question:

Let's say I have a simple stored procedure that looks like this (note: this is just an example, not a practical procedure):

CREATE PROCEDURE incrementCounter AS

DECLARE @current int
SET @current = (select CounterColumn from MyTable) + 1

UPDATE
    MyTable
SET
    CounterColumn = @current
GO

We're assuming there is a table called 'MyTable' that contains one row, with the 'CounterColumn' holding our current count.

Can this stored procedure be executed multiple times, at the same time?

i.e. is this possible:

I call 'incrementCounter' twice. Call A gets to the point where it sets the 'current' variable (let's say it is 5). Call B gets to the point where it sets the 'current' variable (which would also be 5). Call A finishes executing, then Call B finishes. In the end, the table should contain the value of 6, but instead contains 5 due to the overlap of execution

Answer 1:

This is for SQL Server.

Each statement is atomic, but if you want the stored procedure to be atomic (or any sequence of statements in general), you need to explicitly surround the statements with

BEGIN TRANSACTION
Statement ...
Statement ...
COMMIT TRANSACTION

(BEGIN TRAN and COMMIT TRAN are the common short forms.)

Of course there are lots of ways to get into lock trouble depending on what else is going on at the same time, so you may need a strategy for dealing with failed transactions. (A complete discussion of all the circumstances that might result in locks, no matter how you contrive this particular SP, is beyond the scope of the question.) But because of the atomicity, a failed transaction can simply be resubmitted. In my experience you'll probably be fine, though that depends on your transaction volumes and the other activity on the database. Excuse me for stating the obvious.

Note, however, that at the default READ COMMITTED isolation level a transaction alone does not prevent the lost update in your example: the shared lock taken by the SELECT is released as soon as that statement finishes, so two concurrent calls can still read the same value. To hold the lock until commit you also need an UPDLOCK hint on the SELECT, or a stricter isolation level.
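A minimal sketch of the transactional version, with an UPDLOCK table hint added (the hint is an addition beyond the original answer, not part of the questioner's code):

```sql
CREATE PROCEDURE incrementCounter AS
BEGIN TRANSACTION

DECLARE @current int
-- UPDLOCK holds an update lock on the row until COMMIT, so a concurrent
-- call blocks here instead of reading a value that is about to change
SET @current = (SELECT CounterColumn FROM MyTable WITH (UPDLOCK)) + 1

UPDATE MyTable
SET CounterColumn = @current

COMMIT TRANSACTION
GO
```

With the hint in place, the second concurrent call waits on the SELECT until the first transaction commits, then reads the already-incremented value.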



Answer 2:

In addition to placing the code between a BEGIN TRANSACTION and END TRANSACTION, you'd need to ensure that your transaction isolation level is set correctly.

For example, the SERIALIZABLE isolation level will prevent lost updates when the code runs concurrently, but READ COMMITTED (the default in SQL Server) will not.

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

As others have already mentioned, while this ensures consistency, it can cause blocking and deadlocks, so it may not be the best solution in practice.
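Combined with the questioner's procedure, a sketch might look like this (the transaction wrapper is assumed; it is not shown in the answer itself):

```sql
CREATE PROCEDURE incrementCounter AS
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION

DECLARE @current int
SET @current = (SELECT CounterColumn FROM MyTable) + 1

UPDATE MyTable
SET CounterColumn = @current

COMMIT TRANSACTION
GO
```

Note the cost mentioned above: under SERIALIZABLE, two concurrent calls can deadlock (both take shared locks on the SELECT, then block each other on the UPDATE), and SQL Server will kill one as the deadlock victim, so the caller must be prepared to retry.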



Answer 3:

I use this method:

CREATE PROCEDURE incrementCounter
AS

DECLARE @current int

UPDATE MyTable
SET
  @current = CounterColumn = CounterColumn + 1

RETURN @current

This performs the read and the increment in a single atomic UPDATE statement, so it is isolated from other transactions.
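For example, a caller can pick up the new value via the procedure's return code (keeping in mind that RETURN can only carry an integer):

```sql
DECLARE @newValue int
EXEC @newValue = incrementCounter
SELECT @newValue AS NewCounterValue
```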



Answer 4:

Maybe I'm reading too much into your example (and your real situation may be significantly more complicated), but why wouldn't you just do this in a single statement?

CREATE PROCEDURE incrementCounter AS

UPDATE
    MyTable
SET
    CounterColumn = CounterColumn + 1

GO

That way, it's automatically atomic, and if two updates are executed at the same time, they'll always be ordered by SQL Server so as to avoid the conflict you describe. If, however, your real situation is much more complicated, then wrapping it in a transaction is the best way to do this.
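If the caller also needs the incremented value, the single statement can return it with an OUTPUT clause (an addition beyond the original answer):

```sql
CREATE PROCEDURE incrementCounter AS

UPDATE
    MyTable
SET
    CounterColumn = CounterColumn + 1
OUTPUT inserted.CounterColumn   -- returns the new value as a result set

GO
```

This keeps the read and write in one atomic statement while still telling each caller exactly which value its increment produced.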

However, if another process has enabled a "less safe" isolation level (such as one that allows dirty reads or non-repeatable reads), then I don't think a transaction will protect against this, as that process can see the partially updated data if it has elected to allow unsafe reads.



Answer 5:

Short answer: yes, it can and will come up short. If you want to block concurrent executions of the stored procedure, start a transaction and update the same row at the beginning of every execution, before doing any other work in the procedure.

CREATE PROCEDURE ..
BEGIN TRANSACTION
UPDATE mylock SET ref = ref + 1
...

This will force other concurrent executions to wait their turn, since they will not be able to change the 'ref' value until the other transaction(s) complete and the associated update lock is released.

In general, it is a good idea to assume the result of any SELECT query is stale before it is even executed. Using "heavy" isolation levels to work around this unfortunate reality severely limits scalability. It is much better to structure changes so they make optimistic assumptions about the state the system will be in during the update; when an assumption fails, you can try again later and hope for a better outcome. For example:

UPDATE
    MyTable
SET
    CounterColumn = @current
WHERE CounterColumn = @current - 1

With the added WHERE clause, this update affects no rows if the assumption about the row's current state fails. Check @@ROWCOUNT to see how many rows were affected, and roll back or take some other action as appropriate when it differs from the expected count.
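Putting the optimistic pattern together, a sketch might look like the following (the error message and severity are illustrative choices, not from the original answer):

```sql
DECLARE @current int
SET @current = (SELECT CounterColumn FROM MyTable) + 1

UPDATE
    MyTable
SET
    CounterColumn = @current
WHERE CounterColumn = @current - 1   -- optimistic check: row unchanged since the read

IF @@ROWCOUNT = 0
BEGIN
    -- another session changed the counter between our read and our write;
    -- surface the conflict so the caller can retry
    RAISERROR('Counter was updated concurrently; please retry.', 16, 1)
END
```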