SQL Batched Delete

Published 2019-04-04 23:50

I have a table in SQL Server 2005 which has approximately 4 billion rows in it. I need to delete approximately 2 billion of these rows. If I try to do it in a single transaction, the transaction log fills up and it fails. I don't have any extra space to make the transaction log bigger. I assume the best way forward is to batch up the delete statements (in batches of ~10,000?).

I can probably do this using a cursor, but is there a standard/easy/clever way of doing this?

P.S. This table does not have an identity column as a PK. The PK is made up of an integer foreign key and a date.
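
(To illustrate, the kind of batching I have in mind is something like the sketch below; SQL Server 2005 accepts TOP in a DELETE statement, and My_Table and the date criteria are just placeholders:)

-- Sketch only: delete in batches of 10,000 until no matching rows remain.
-- My_Table and the WHERE criteria are placeholders.
DECLARE @rows INT
SET @rows = 1

WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM My_Table
    WHERE entry_date < '20050101'

    SET @rows = @@ROWCOUNT
END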

9 Answers
欢心
#2 · 2019-04-05 00:19

In addition to putting this in a batch with a statement to truncate the log, you might also want to try these tricks:

  • Add criteria that matches the first column in your clustered index in addition to your other criteria
  • Drop any nonclustered indexes from the table and then put them back after the delete is done, if that's possible and won't interfere with anything else going on in the DB, but KEEP the clustered index (a sketch of this follows the code below)

For the first point above, for example, if your PK is clustered then find a range which approximately matches the number of rows that you want to delete in each batch and use that:

DECLARE @max_id INT, @start_id INT, @end_id INT, @interval INT
SELECT @start_id = MIN(id), @max_id = MAX(id) FROM My_Table
SET @interval = 100000  -- You need to determine the right number here
SET @end_id = @start_id + @interval

WHILE (@start_id <= @max_id)
BEGIN
     DELETE FROM My_Table WHERE id BETWEEN @start_id AND @end_id AND <your criteria>

     SET @start_id = @end_id + 1
     SET @end_id = @end_id + @interval
END
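
For the second point, a minimal sketch of dropping and recreating the nonclustered indexes around the delete (the index and column names here are hypothetical):

-- Hypothetical names: drop nonclustered indexes before the batched delete
DROP INDEX IX_My_Table_SomeColumn ON My_Table

-- ... run the batched delete loop above ...

-- Recreate them once the delete has finished, leaving the clustered
-- index in place throughout
CREATE NONCLUSTERED INDEX IX_My_Table_SomeColumn ON My_Table (SomeColumn)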
一纸荒年 Trace。
#3 · 2019-04-05 00:20

Here is my example:

-- configure script
-- Script limits - rows per transaction/commit (default 10,000)
-- and time to allow the script to run (in seconds, default 2 hours)
--
DECLARE @MAX INT
DECLARE @MAXT INT
--
-- $MAX, $MAXT, $TABLE and $WHERE are placeholders substituted textually
-- by the calling shell script. $TABLE and $WHERE cannot be T-SQL
-- variables, because a variable cannot stand in for a table name or a
-- WHERE clause.
--
SET @MAX = $MAX
SET @MAXT = $MAXT

-- step 1 - Main loop
DECLARE @continue INT
-- deleted in one transaction
DECLARE @deleted INT
-- deleted total in script
DECLARE @total INT
SET @total = 0
DECLARE @max_id INT, @start_id INT, @end_id INT, @interval INT
SET @interval = @MAX
SELECT @start_id = MIN(id), @max_id = MAX(id) FROM $TABLE

-- timing
DECLARE @start DATETIME
DECLARE @now DATETIME
DECLARE @timee INT
SET @start = GETDATE()
-- 
SET @continue = 1
-- Resume from the checkpoint table left behind by a previous run, if any
IF OBJECT_ID (N'EntryID', 'U') IS NULL 
BEGIN
    CREATE TABLE EntryID (startid INT)
    INSERT INTO EntryID(startid) VALUES(@start_id)
END
ELSE
BEGIN
    SELECT @start_id = startid FROM EntryID
END
-- Set the first range end only after the resume check, so a restarted
-- run does not begin from a stale @end_id
SET @end_id = @start_id + @interval


WHILE (@continue = 1 AND @start_id <= @max_id)
BEGIN

    PRINT 'Start issued:   ' + CONVERT(varchar(19), GETDATE(), 120)
    BEGIN TRANSACTION
        DELETE 
        FROM $TABLE
        WHERE id BETWEEN @start_id AND @end_id AND $WHERE
        SET @deleted = @@ROWCOUNT
    UPDATE EntryID SET EntryID.startid = @end_id + 1
    COMMIT
    PRINT 'Deleted issued: ' + STR(@deleted) + ' records. ' + CONVERT(varchar(19), GETDATE(), 120) 
    SET @total = @total + @deleted
    SET @start_id = @end_id + 1
    SET @end_id = @end_id + @interval
    IF @end_id > @max_id
        SET @end_id = @max_id

    SET @now = GETDATE()
    SET @timee = DATEDIFF (second, @start, @now)
    IF @timee > @MAXT
    BEGIN
        PRINT 'Time limit exceeded for the script, exiting'
        SET @continue = 0
    END
--    ELSE
--    BEGIN
--      SELECT @total 'Removed now', @timee 'Total time, seconds'   
--    END
END

SELECT @total 'Removed records', @timee 'Total time sec' , @start_id 'Next id', @max_id 'Max id', @continue 'COMPLETED? '
SELECT * FROM EntryID AS next_start_id

GO
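
For illustration, after the shell script has substituted hypothetical values (the table, column, and criteria below are made up), the key lines of the script would read:

-- After substitution (hypothetical values):
SET @MAX = 10000    -- rows per transaction
SET @MAXT = 7200    -- allow the script to run for two hours

SELECT @start_id = MIN(id), @max_id = MAX(id) FROM Order_History

DELETE 
FROM Order_History
WHERE id BETWEEN @start_id AND @end_id AND entry_date < '20050101'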
不美不萌又怎样
#4 · 2019-04-05 00:21

Well, if you were using SQL Server partitioning, say based on the date column, you could simply have switched out the partitions that are no longer required. Perhaps a consideration for a future implementation.
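
For reference, switching a partition out is a metadata-only operation, so it avoids the transaction-log problem entirely. A minimal sketch, assuming My_Table is already partitioned on the date column and My_Table_Staging is an empty table with an identical structure on the same filegroup (all names hypothetical):

-- Move the whole first partition (the oldest dates) out of the main
-- table; this is a metadata change, not a row-by-row delete
ALTER TABLE My_Table SWITCH PARTITION 1 TO My_Table_Staging

-- The switched-out rows can now be discarded cheaply
TRUNCATE TABLE My_Table_Staging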

I think the best option may be as you say, to delete the data in smaller batches, rather than in one hit, so as to avoid any potential blocking issues.

You could also consider the following method:

  1. Copy the data to keep into a temporary table
  2. Truncate the original table to purge all data
  3. Move everything from the temporary table back into the original table

Your indexes would also be rebuilt as the data is added back to the original table. A sketch of the idea follows.
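
A minimal sketch of that method, assuming the table can be taken offline while it runs (the table name and keep criteria are hypothetical):

-- 1. Copy the rows to KEEP into a holding table (SELECT INTO is
--    minimally logged under the simple/bulk-logged recovery model)
SELECT *
INTO My_Table_Keep
FROM My_Table
WHERE entry_date >= '20050101'

-- 2. TRUNCATE deallocates pages rather than logging each deleted row
TRUNCATE TABLE My_Table

-- 3. Move the kept rows back, then drop the holding table
--    (note: this insert is itself fully logged)
INSERT INTO My_Table
SELECT * FROM My_Table_Keep

DROP TABLE My_Table_Keep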
