Possible to implement a manual increment with just SQL commands?

Published 2019-02-13 22:58

Question:

I have a primary key that I don't want to auto increment (for various reasons) and so I'm looking for a way to simply increment that field when I INSERT. By simply, I mean without stored procedures and without triggers, so just a series of SQL commands (preferably one command).

Here is what I have tried thus far:

BEGIN TRAN

INSERT INTO Table1(id, data_field)
VALUES ( (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]');

COMMIT TRAN;

* Data abstracted to use generic names and identifiers

However, when executed, the command fails with the error:

"Subqueries are not allowed in this context. Only scalar expressions are allowed."

So, how can I do this/what am I doing wrong?


EDIT: Since it was pointed out as a consideration, the table to be inserted into is guaranteed to have at least 1 row already.

Answer 1:

You understand that you will have collisions, right?

You need to do something like the following. This might cause deadlocks, so be very sure of what you are trying to accomplish here:

DECLARE @id int
BEGIN TRAN

    SELECT @id = MAX(id) + 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
    INSERT INTO Table1(id, data_field)
    VALUES (@id ,'[blob of data]')
COMMIT TRAN

To demonstrate the collisions, I have provided some code.

First, create this table and insert one row:

CREATE TABLE Table1(id int primary key not null, data_field char(100))
GO
INSERT INTO Table1 VALUES (1, '[blob of data]')
Go

Now open up two query windows and run this in both at the same time:

declare @i int
set @i =1
while @i < 10000
begin
BEGIN TRAN

INSERT INTO Table1(id, data_field)
SELECT MAX(id) + 1, '[blob of data]' FROM Table1

COMMIT TRAN;
set @i =@i + 1
end

You will see a bunch of these:

Server: Msg 2627, Level 14, State 1, Line 7 Violation of PRIMARY KEY constraint 'PK__Table1__3213E83F2962141D'. Cannot insert duplicate key in object 'dbo.Table1'. The statement has been terminated.



Answer 2:

Try this instead:

INSERT INTO Table1 (id, data_field)
SELECT id, '[blob of data]' FROM (SELECT MAX(id) + 1 as id FROM Table1) tbl

I wouldn't recommend doing it that way for any number of reasons, though (performance, transaction safety, etc.).



Answer 3:

I don't know if somebody is still looking for an answer, but here is a solution that seems to work:

-- Preparation: execute only once
CREATE TABLE Test (Value int)

CREATE TABLE Lock (LockID uniqueidentifier)
INSERT INTO Lock SELECT NEWID()

-- Real insert
BEGIN TRAN LockTran

-- Lock an object to block simultaneous calls.
UPDATE  Lock WITH(TABLOCK)
SET     LockID = LockID

INSERT INTO Test
SELECT ISNULL(MAX(T.Value), 0) + 1
FROM Test T

COMMIT TRAN LockTran


Answer 4:

We have a similar situation where we needed to increment and could not have gaps in the numbers. (If you use an identity value and a transaction is rolled back, that number will not be inserted and you will have gaps because the identity value does not roll back.)

We created a separate table for last number used and seeded it with 0.

Our insert takes a few steps.

DECLARE @number int

BEGIN TRAN

-- increment the number
UPDATE dbo.NumberTable SET number = number + 1

-- find out what the incremented number is
SELECT @number = number FROM dbo.NumberTable

-- use the number
INSERT INTO dbo.MyTable /* ... */ VALUES (@number /* , ... */)

COMMIT TRAN -- or ROLLBACK

This causes simultaneous transactions to process in single file, as each concurrent transaction will wait because the NumberTable is locked. As soon as the waiting transaction gets the lock, it increments the current value and locks it from others. That current value is the last number used, and if a transaction is rolled back, the NumberTable update is also rolled back, so there are no gaps.

Hope that helps.

Another way to cause single-file execution is to use a SQL Server application lock. We have used that approach for longer-running processes, like synchronizing data between systems, so that only one synchronizing process can run at a time.
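A minimal sketch of the application-lock idea using sp_getapplock (the lock name, table, and column names here are illustrative, not from the original answer):

```sql
BEGIN TRAN

-- Serialize callers: only one session at a time may hold this named lock.
DECLARE @lockResult int
EXEC @lockResult = sp_getapplock
    @Resource    = 'MyTable_NextId',   -- arbitrary lock name (illustrative)
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',      -- released automatically at COMMIT/ROLLBACK
    @LockTimeout = 5000                -- ms to wait before giving up

IF @lockResult >= 0   -- 0 = granted, 1 = granted after waiting
BEGIN
    DECLARE @id int
    SELECT @id = ISNULL(MAX(id), 0) + 1 FROM dbo.MyTable
    INSERT INTO dbo.MyTable (id, data_field) VALUES (@id, '[blob of data]')
END

COMMIT TRAN  -- also releases the application lock
```

Because the lock is owned by the transaction, there is no separate release step; a rollback frees it just the same.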



Answer 5:

It could be that the table has no rows, so the subquery is returning NULL. Try:

INSERT INTO tblTest(RecordID, Text) 
VALUES ((SELECT ISNULL(MAX(RecordID), 0) + 1 FROM tblTest), 'asdf')


Answer 6:

If you're doing it in a trigger, you could make sure it's an "INSTEAD OF" trigger and do it in a couple of statements:

DECLARE @next INT
SET @next = (SELECT MAX(id) + 1 FROM Table1)

INSERT INTO Table1 (id, data_field)
SELECT @next, datablob FROM inserted

The only thing you'd have to be careful about is concurrency - if two rows are inserted at the same time, they could attempt to use the same value for @next, causing a conflict.

Does this accomplish what you want?



Answer 7:

This should work:

INSERT INTO Table1 (id, data_field)
SELECT (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]';

Or this (substitute LIMIT for TOP on other platforms; note that MAX cannot be combined with ORDER BY [id] here, so the highest id is taken via TOP 1 instead):

INSERT INTO Table1 (id, data_field)
SELECT TOP 1
    id + 1, '[blob of data]'
FROM
   Table1
ORDER BY
   [id] DESC;


Answer 8:

DECLARE @nextId int

BEGIN TRAN

SET @nextId = (SELECT MAX(id) + 1 FROM Table1)

INSERT INTO Table1 (id, data_field) VALUES (@nextId, '[blob of data]')

COMMIT TRAN

But perhaps a better approach would be using a scalar function getNextId('table1')
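Such a function might look like the sketch below. Note two caveats: a T-SQL scalar function cannot take the table name as a runtime parameter (dynamic SQL is not allowed in functions), so in practice you would write one per table, and the function does nothing to solve the concurrency problem by itself. The function name is illustrative:

```sql
CREATE FUNCTION dbo.getNextId_Table1 ()
RETURNS int
AS
BEGIN
    -- ISNULL covers the empty-table case
    RETURN (SELECT ISNULL(MAX(id), 0) + 1 FROM dbo.Table1)
END
GO

-- Usage (still needs locking around it to be safe under concurrency):
INSERT INTO Table1 (id, data_field)
VALUES (dbo.getNextId_Table1(), '[blob of data]')
```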



Answer 9:

It seems very odd to do this sort of thing w/o an IDENTITY (auto-increment) column, making me question the architecture itself. I mean, seriously, this is the perfect situation for an IDENTITY column. It might help us answer your question if you'd explain the reasoning behind this decision. =)

Having said that, some options are:

  • using an INSTEAD OF trigger for this purpose. So, you'd do your INSERT (the INSERT statement would not need to pass in an ID). The trigger code would handle inserting the appropriate ID. You'd need to use the WITH (UPDLOCK, HOLDLOCK) syntax used by another answerer to hold the lock for the duration of the trigger (which is implicitly wrapped in a transaction) & to elevate the lock type from "shared" to "update" lock (IIRC).
  • you can use the idea above, but have a table whose purpose is to store the last, max value inserted into the table. So, once the table is set up, you would no longer have to do a SELECT MAX(ID) every time. You'd simply increment the value in the table. This is safe provided that you use appropriate locking (as discussed). Again, that avoids repeated table scans every time you INSERT.
  • use GUIDs instead of IDs. It's much easier to merge tables across databases, since the GUIDs will always be unique (whereas records across databases will have conflicting integer IDs). To avoid page splitting, sequential GUIDs can be used. This is only beneficial if you might need to do database merging.
  • Use a stored proc in lieu of the trigger approach (since triggers are to be avoided, for some reason). You'd still have the locking issue (and the performance problems that can arise). But sprocs are preferred over dynamic SQL (in the context of applications), and are often much more performant.
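The sequential-GUID option from the list above can be sketched like this (table and constraint names are illustrative); NEWSEQUENTIALID() is only valid as a column default, not as an expression in an INSERT:

```sql
CREATE TABLE dbo.Table1_Guid (
    id         uniqueidentifier NOT NULL
               CONSTRAINT DF_Table1_Guid_id DEFAULT NEWSEQUENTIALID()
               CONSTRAINT PK_Table1_Guid PRIMARY KEY,
    data_field char(100)
)

-- No key value is supplied; the default generates a mostly-increasing GUID,
-- which avoids the page splits that random NEWID() values cause.
INSERT INTO dbo.Table1_Guid (data_field) VALUES ('[blob of data]')
```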

Sorry about rambling. Hope that helps.



Answer 10:

Any critiques of this? Works for me.

DECLARE @m_NewRequestID INT
        , @m_IsError BIT = 1
        , @m_CatchEndless INT = 0

WHILE @m_IsError = 1
    BEGIN TRY
        SELECT  @m_NewRequestID = (SELECT ISNULL(MAX(RequestID), 0) + 1 FROM Requests)

        INSERT INTO Requests (  RequestID
                                , RequestName
                                , Customer
                                , Comment
                                , CreatedFromApplication)
            SELECT  RequestID = @m_NewRequestID
                    , RequestName = dbo.ufGetNextAvailableRequestName(PatternName)
                    , Customer = @Customer
                    , Comment = [Description]
                    , CreatedFromApplication = @CreatedFromApplication
                FROM    RequestPatterns
                WHERE   PatternID = @PatternID

        SET @m_IsError = 0
    END TRY
    BEGIN CATCH
        SET @m_IsError = 1
        SET @m_CatchEndless = @m_CatchEndless + 1;  -- THROW requires the preceding statement to end with a semicolon
        IF @m_CatchEndless > 1000
            THROW 51000, '[upCreateRequestFromPattern]: Unable to get new RequestID', 1
    END CATCH


Answer 11:

How about creating a separate table to maintain the counter? That performs better than MAX(id): reading the counter is O(1), while MAX(id) is at best O(log n), depending on the indexing.

Then, when you need to insert, simply lock the counter table, read the counter, and increment it. Then you can release the lock and insert into your table with the incremented counter value.
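A minimal sketch of that counter-table approach (table and column names are illustrative). T-SQL's compound assignment in UPDATE increments and reads back the value in one atomic statement, and the exclusive row lock it takes is held until COMMIT, so concurrent inserters queue up:

```sql
-- One-time setup
CREATE TABLE dbo.Counter (next_id int NOT NULL)
INSERT INTO dbo.Counter VALUES (0)
GO

-- Per insert
BEGIN TRAN

DECLARE @id int

-- Increment and capture in a single statement; the row stays
-- exclusively locked until COMMIT, serializing concurrent callers.
UPDATE dbo.Counter
SET    @id = next_id = next_id + 1

INSERT INTO dbo.MyTable (id, data_field) VALUES (@id, '[blob of data]')

COMMIT TRAN
```

Holding the lock until COMMIT (rather than releasing it before the insert) is what keeps the sequence gap-free when a transaction rolls back.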