I have a script to find duplicate rows in my MySQL table; the table contains 40,000,000 rows, but the script is very slow going. Is there an easier way to find the duplicate records without going in and out of PHP?
This is the script I currently use:
$find = mysql_query("SELECT * FROM pst_nw WHERE ID < '1000'");
while ($row = mysql_fetch_assoc($find))
{
    $find_1 = mysql_query("SELECT * FROM pst_nw WHERE add1 = '$row[add1]' AND add2 = '$row[add2]' AND add3 = '$row[add3]' AND add4 = '$row[add4]'");
    if (mysql_num_rows($find_1) > 0) {
        mysql_query("DELETE FROM pst_nw WHERE ID = '$row[ID]'");
    }
}
You have a number of options.
Let the DB do the work
Create a copy of your table with a unique index - and then insert the data into it from your source table:
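For example, something along these lines (pst_nw_clean and ux_addr are just placeholder names I picked):
CREATE TABLE pst_nw_clean LIKE pst_nw;
ALTER TABLE pst_nw_clean ADD UNIQUE INDEX ux_addr (add1, add2, add3, add4);
INSERT IGNORE INTO pst_nw_clean SELECT * FROM pst_nw;
-- check pst_nw_clean, then swap the tables
RENAME TABLE pst_nw TO pst_nw_with_dupes, pst_nw_clean TO pst_nw;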
The advantage of doing things this way is you can verify that your new table is correct before dropping your source table. The disadvantage is it takes up twice as much space and is (relatively) slow to execute.
Let the DB do the work #2
You can also achieve the result you want by doing:
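Something like this (the index name is a placeholder, and ALTER ... IGNORE only exists in MySQL 5.6 and earlier):
set session old_alter_table=1;
ALTER IGNORE TABLE pst_nw ADD UNIQUE INDEX ux_addr (add1, add2, add3, add4);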
The first command is required as a workaround for the ignore flag being .. ignored
The advantage here is there's no messing about with a temporary table - the disadvantage is you don't get to check that your update does exactly what you expect before you run it.
Example:
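A made-up toy table to show the effect:
create table demo (id int, add1 varchar(10), add2 varchar(10), add3 varchar(10), add4 varchar(10));
insert into demo values (1,'a','b','c','d'), (2,'a','b','c','d'), (3,'e','f','g','h');
set session old_alter_table=1;
alter ignore table demo add unique index ux_addr (add1, add2, add3, add4);
-- demo now contains only ids 1 and 3; the duplicate row with id 2 was dropped
select * from demo;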
Don't do this kind of thing outside the DB
Especially with 40 million rows, doing something like this outside the DB is likely to take a huge amount of time, and may not complete at all. Any solution that stays in the DB will be faster and more robust.
Sure there is. Note, however, that with 40 million records you will most probably exceed the max PHP execution time. Try the following:
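For instance, something like this (it keeps one arbitrary row per address combination, needs ONLY_FULL_GROUP_BY disabled, and the new table will not inherit the original indexes):
CREATE TABLE pst_nw_dedup AS
SELECT * FROM pst_nw
GROUP BY add1, add2, add3, add4;
-- inspect pst_nw_dedup before swapping the tables
RENAME TABLE pst_nw TO pst_nw_old, pst_nw_dedup TO pst_nw;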
Confirm that everything is ok first!!
Your code will be better if you don't use select *; only select the columns (the four address fields) you want to compare. The query should also have a LIMIT clause, which avoids the script hanging when you have that many rows.

Usually in questions like this the problem is "I have duplicate rows, want to keep only one row, any one".
But judging from the code, what you want is: "if a set of add1, add2, add3, add4 is duplicated, DELETE ALL COPIES WITH ID < 1000". In this case, copying from the table into another with INSERT IGNORE won't do what you want; it might even keep the rows with lower IDs and discard the subsequent ones.
I believe you need to run something like this to gather all the "bad IDs" (IDs that have a duplicate with an ID above 1000). In this code I used "AND bad.ID < good.ID", so if you have ID 777 which duplicates ID 888, ID 777 will still get deleted. If this is not what you want, you can change that condition to "AND bad.ID < 1000 AND good.ID > 1000" or something like that.
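Something along these lines (bad_ids is just a placeholder name):
CREATE TABLE bad_ids AS
SELECT DISTINCT bad.ID
FROM pst_nw AS bad
JOIN pst_nw AS good
  ON  good.add1 = bad.add1
  AND good.add2 = bad.add2
  AND good.add3 = bad.add3
  AND good.add4 = bad.add4
  AND bad.ID < good.ID;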
Then, once you have all the bad IDs in a table, delete the matching rows with a join:
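For example, assuming the bad_ids table from above:
DELETE pst_nw
FROM pst_nw
JOIN bad_ids ON pst_nw.ID = bad_ids.ID;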
Performance will greatly benefit from a (non-unique, possibly only temporary) index on add1, add2, add3, add4 and ID, in this order.
Try creating a new table that has the same definition, e.g. "my_table_two", then do:
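Possibly something like this (it relies on MySQL picking one arbitrary row per group, so ONLY_FULL_GROUP_BY must be off):
INSERT INTO my_table_two
SELECT * FROM pst_nw
GROUP BY add1, add2, add3, add4;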
Maybe that'll sort it out.
Get the duplicate rows using the GROUP BY operator. Here is a sample that you can try:
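Something in this spirit:
SELECT GROUP_CONCAT(ID) AS ids, add1, add2, add3, add4, COUNT(*) AS cnt
FROM pst_nw
GROUP BY add1, add2, add3, add4
HAVING cnt > 1;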
So, you are getting all the duplicate IDs. Now delete them using a DELETE query. Instead of using "IN", use the "OR" operator, as "IN" is slow compared to "OR".
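For example, with placeholder IDs standing in for the output of the previous query:
DELETE FROM pst_nw
WHERE ID = 2 OR ID = 5 OR ID = 9;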