What is the best way to delete all of a large table?

Posted 2019-04-19 09:29

Question:

We've run across a slightly odd situation. Basically there are two tables in one of our databases that are fed tons and tons of logging info we don't need or care about. Partially because of this we're running out of disk space.

I'm trying to clean out the tables, but it's taking forever (there are still 57,000,000+ records after letting this run through the weekend... and that's just the first table!)

Just using a plain DELETE is taking forever and eats up drive space (I believe because of the transaction log). Right now I'm using a WHILE loop to delete records X at a time, while playing around with X to determine what's actually fastest. For instance, X=1000 takes 3 seconds, while X=100,000 takes 26 seconds... which, doing the math, is considerably faster per row (roughly 3,800 rows per second versus about 330).
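For reference, here is roughly the shape of the batching loop (a sketch; the table name dbo.LogTable is a stand-in for the real one):

DECLARE @BatchSize INT = 100000;  -- the X being tuned

WHILE 1 = 1
BEGIN
    -- Delete the next batch of rows
    DELETE TOP (@BatchSize) FROM dbo.LogTable;

    -- Stop once nothing is left to delete
    IF @@ROWCOUNT = 0 BREAK;
END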

But the question is: is there a better way?

(Once this is done, I'm going to run a SQL Agent job to clean the table out once a day... but it needs to be cleared out first.)
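(A sketch of what that daily step might look like, assuming the rows carry a datetime column; LogDate is a hypothetical name:)

-- Keep only the last day's rows; LogDate is a placeholder column
DELETE FROM dbo.LogTable
WHERE LogDate < DATEADD(DAY, -1, GETDATE());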

Answer 1:

TRUNCATE the table or disable indexes before deleting

TRUNCATE TABLE [tablename]

Truncating will remove all records from the table without logging each deletion separately.
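If you take the delete route instead, a sketch of disabling nonclustered indexes first (the index and table names here are placeholders; don't disable the clustered index, or the table becomes inaccessible):

-- Disable a nonclustered index so the deletes don't have to maintain it
ALTER INDEX IX_LogTable_Date ON dbo.LogTable DISABLE;

-- ... run the deletes ...

-- Rebuild to bring the index back afterwards
ALTER INDEX IX_LogTable_Date ON dbo.LogTable REBUILD;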



Answer 2:

To add to the other responses, if you want to hold onto the past day's data (or past month or year or whatever), then save that off, do the TRUNCATE TABLE, then insert it back into the original table:

SELECT *
INTO tmp_My_Table
FROM My_Table
WHERE <Some_Criteria>

TRUNCATE TABLE My_Table

INSERT INTO My_Table SELECT * FROM tmp_My_Table
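One caveat worth sketching: if My_Table has an IDENTITY column, the insert back needs IDENTITY_INSERT and an explicit column list (the columns below are hypothetical), and the temp copy can be dropped afterwards:

SET IDENTITY_INSERT My_Table ON;

-- An explicit column list is required while IDENTITY_INSERT is ON;
-- Id, LogDate, Message are placeholder column names.
INSERT INTO My_Table (Id, LogDate, Message)
SELECT Id, LogDate, Message
FROM tmp_My_Table;

SET IDENTITY_INSERT My_Table OFF;

DROP TABLE tmp_My_Table;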

The next thing to do is ask yourself why you're inserting all of this information into a log if no one cares about it. If you really don't need it at all then turn off the logging at the source.



Answer 3:

1) Truncate the table, or

2) script out the table, then drop and recreate it (a sketch follows below).
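For option 2, a minimal sketch, assuming a simple log table (the column definitions are placeholders for whatever scripting out the real table produces):

DROP TABLE dbo.LogTable;

CREATE TABLE dbo.LogTable (
    Id      INT IDENTITY(1,1) PRIMARY KEY,  -- placeholder schema
    LogDate DATETIME,
    Message NVARCHAR(MAX)
);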



Answer 4:

TRUNCATE TABLE [tablename]

will remove all the records with minimal logging (the page deallocations are logged rather than the individual rows).



Answer 5:

Depending on how much you want to keep, you could just copy the records you want to a temp table, truncate the log table, and copy the temp table records back to the log table.



Answer 6:

If you can work out the optimum X, this will loop the delete continuously at the quickest rate. Setting the rowcount limits the number of records that get deleted in each step of the loop. If the log file is getting too big, stick a counter in the loop and back up or truncate the log every million rows or so.

SET ROWCOUNT 100000   -- X: tune this batch size

WHILE 1 = 1
BEGIN
    DELETE FROM My_Table

    IF @@ROWCOUNT = 0 BREAK
END

SET ROWCOUNT 0        -- reset, or later statements are limited too

Changing the recovery model of the database to SIMPLE or BULK_LOGGED will also reduce some of the delete overhead.
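A sketch of the switch (MyDb is a placeholder; if the database normally runs in FULL recovery, switch back and take a full backup afterwards to restart the log backup chain):

ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- ... run the bulk delete ...

ALTER DATABASE MyDb SET RECOVERY FULL;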



Answer 7:

Check these:

  1. the MSDN article Delete_a_Huge_Amount_of_Data_from
  2. Information on Recovery Models
  3. View or Change the Recovery Model of a Database