Let's say we have a table Sales with 30 columns and 500,000 rows. I would like to delete 400,000 rows from the table (those where "toDelete='1'").
But I have a few constraints:
- the table is read/written "often", and I would not like the delete to run for a long time and lock the table while it does
- I need to skip the transaction log (as with a TRUNCATE), but while doing a "DELETE ... WHERE ..." (I need to put a condition) — and I haven't found any way to do this...
Any advice would be welcome on how to transform a
DELETE FROM Sales WHERE toDelete='1'
into something more partitioned and possibly transaction-log free.
You should try giving it a ROWLOCK hint so it will not lock the entire table. However, if you delete a lot of rows, lock escalation will occur. Also, make sure you have a non-clustered filtered index (covering only the '1' values) on the toDelete column. If possible, make it a bit column, not a varchar (or whatever it is now). Ultimately, you can try to iterate over the table and delete in chunks.
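A sketch of that setup (assuming toDelete is, or has been converted to, a bit column, and using the Sales table from the question):

```sql
-- Filtered index so the WHERE toDelete = 1 predicate becomes a cheap seek.
CREATE NONCLUSTERED INDEX IX_Sales_toDelete
    ON Sales (toDelete)
    WHERE toDelete = 1;

-- Delete one chunk with row locks requested; repeat while @@ROWCOUNT > 0.
-- Note: SQL Server may still escalate to a table lock once a single
-- statement holds roughly 5,000 locks, which is why the chunk stays small.
DELETE TOP (4000) FROM Sales WITH (ROWLOCK)
WHERE toDelete = 1;
```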
Updated
Since while loops and chunk deletes are the new pink here, I'll throw in my version too (combined with my previous answer):
My own take on this functionality would be as follows. This way there is no repeated code and you can manage your chunk size.
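Something along these lines (a sketch — the 4,000-row default is just a starting point to tune):

```sql
DECLARE @ChunkSize int = 4000;  -- keep below the ~5,000-lock escalation threshold

WHILE 1 = 1
BEGIN
    DELETE TOP (@ChunkSize)
    FROM Sales WITH (ROWLOCK)
    WHERE toDelete = '1';

    IF @@ROWCOUNT = 0
        BREAK;  -- nothing left to delete
END
```

The single DELETE statement inside the loop is the only place the chunk logic lives, and @ChunkSize is the one knob to turn.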
Calling
DELETE FROM TableName
will do the entire delete in one large transaction, which is expensive. Here is another option, which will delete rows in batches:
In my experience, the best way to delete a huge number of records is to delete them by primary key. So you have to generate a T-SQL script that contains the whole list of rows to delete, and then execute that script.
For example, the code below will generate such a script file:
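A sketch of such a generator, assuming the primary key column is named SalesID (adjust to your schema) and that the query's output is redirected to a file:

```sql
SET NOCOUNT ON;

-- Emit one DELETE statement per doomed row, keyed by the primary key.
SELECT 'DELETE FROM Sales WHERE SalesID = '
       + CAST(SalesID AS varchar(20)) + ';'
FROM Sales
WHERE toDelete = '1';
```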
The output file will have records like:
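Assuming a SalesID primary key, each line of the generated file would look something like:

```sql
DELETE FROM Sales WHERE SalesID = 1;
DELETE FROM Sales WHERE SalesID = 3;
DELETE FROM Sales WHERE SalesID = 8;
```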
And now you have to use the SQLCMD utility to execute this script. You can find this approach explained here: https://www.mssqltips.com/sqlservertip/3566/deleting-historical-data-from-a-large-highly-concurrent-sql-server-database-table/
One way I have had to do this in the past is to have a stored procedure or script that deletes n records at a time. Repeat until done.
What you want is batch processing.
Of course, you can experiment to find the best batch size; I've used anywhere from 500 to 50,000 depending on the table. If you use cascading deletes, you will probably need a smaller batch, as there are child records to delete as well.
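A stored-procedure sketch of that repeat-until-done loop (the procedure name and default batch size are assumptions):

```sql
CREATE PROCEDURE dbo.DeleteSalesInBatches
    @BatchSize int = 5000
AS
BEGIN
    SET NOCOUNT ON;

    WHILE 1 = 1
    BEGIN
        -- Each batch commits separately, so under the SIMPLE recovery
        -- model the log space can be reused between batches.
        DELETE TOP (@BatchSize)
        FROM Sales
        WHERE toDelete = '1';

        IF @@ROWCOUNT = 0
            BREAK;
    END
END
```

Call it with a batch size suited to the table, e.g. EXEC dbo.DeleteSalesInBatches @BatchSize = 10000;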