mongo 3 duplicates on unique index - dropDups

Published 2019-01-22 04:05

Question:

The MongoDB documentation says: "Changed in version 3.0: The dropDups option is no longer available."

Is there anything I can do (other than downgrading) if I actually want to create a unique index and destroy duplicate entries?

Please keep in mind that I receive about 300 inserts per second, so I can't just delete all duplicates and hope none will come in by the time I'm done indexing.

Answer 1:

Yes, dropDups has been deprecated since version 2.7.5 because it was not possible to predict correctly which document would be deleted in the process.

Typically, you have two options:

  1. Use a new collection:

    • Create a new collection,
    • Create the unique index on this new collection,
    • Run a batch to copy all the documents from the old collection to the new one, ignoring duplicate key errors during the process (see the sketch after this list).
  2. Deal with it in your own collection manually:

    • Make sure you won't insert any more duplicated documents from your code,
    • Run a batch on your collection to delete the duplicates (and make sure you keep the right one if they are not completely identical),
    • Then add the unique index.
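
A minimal mongo shell sketch of option 1, assuming a hypothetical source collection events whose field key must become unique (the collection and field names are placeholders, not from the original answer):

// Hypothetical names: "events" is the existing collection,
// "events_new" is the copy, and "key" is the field that must be unique.
db.events_new.drop();
db.events_new.createIndex({ "key": 1 }, { unique: true });

db.events.find().forEach(function (doc) {
  var res = db.events_new.insert(doc);
  // Skip duplicate key errors (code 11000) so the first copy wins;
  // surface anything else.
  if (res.hasWriteError() && res.getWriteError().code !== 11000) {
    throw res.getWriteError();
  }
});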

For your particular case, I would recommend the first option, but with a trick:

  • Create a new collection with the unique index,
  • Update your code so you now insert documents into both collections,
  • Run a batch to copy all documents from the old collection to the new one (ignoring duplicate key errors),
  • Rename the new collection to match the old name (sketched below),
  • Re-update your code so you now write only to the "old" collection.
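
The rename in the fourth step can be done with renameCollection, which takes a dropTarget flag; continuing with the hypothetical names from the sketch above:

// Atomically replace the old collection with the de-duplicated copy.
// The second argument (dropTarget: true) drops the existing "events" collection.
db.events_new.renameCollection("events", true);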


Answer 2:

As highlighted by @Maxime-Beugnet, you can write a batch script to remove duplicates from a collection. I have included my approach below, which is relatively fast if the number of duplicates is small in comparison to the collection size. For demonstration purposes, this script will de-duplicate the collection created by the following script:

// Populate a test collection: every value from 0 to 100000 twice,
// and even values a third time.
db.numbers.drop();

var counter = 0;
while (counter <= 100000) {
  db.numbers.save({ "value": counter });
  db.numbers.save({ "value": counter });
  if (counter % 2 == 0) {
    db.numbers.save({ "value": counter });
  }
  counter = counter + 1;
}

You can find the duplicates in this collection by writing an aggregation query that returns every value occurring more than once:

var cur = db.numbers.aggregate([
  // Group by value, collecting the _ids of all documents that share it.
  { $group: { _id: { value: "$value" }, uniqueIds: { $addToSet: "$_id" }, count: { $sum: 1 } } },
  // Keep only the values that appear more than once.
  { $match: { count: { $gt: 1 } } }
]);

Using the cursor you can then iterate over the duplicate records and implement your own business logic to decide which of the duplicates to remove. In the example below I am simply keeping the first occurrence:

while (cur.hasNext()) {
    var doc = cur.next();
    // Keep the first _id in each group and remove the rest.
    var index = 1;
    while (index < doc.uniqueIds.length) {
        db.numbers.remove({ "_id": doc.uniqueIds[index] });
        index = index + 1;
    }
}

After removing the duplicates you can add a unique index:

db.numbers.createIndex({ "value": 1 }, { unique: true })
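
With the index in place, any further duplicate insert is rejected; a quick check in the shell, using the numbers collection from above:

// 42 already exists, so the unique index rejects this write.
var res = db.numbers.insert({ "value": 42 });
print(res.hasWriteError());       // true
print(res.getWriteError().code);  // 11000 (duplicate key)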


Answer 3:

pip install mongo_remove_duplicate_indexes

The best way is to write a script in Python (or any language you prefer) that iterates over the collection, creates a new collection with a unique index via db.collectionname.createIndex({'indexname': 1}, {unique: true}), and inserts the documents from the old collection into the new one. Documents that duplicate the key you wanted to be unique are rejected by the index and never make it into the new collection, and you can handle the resulting duplicate key errors with ordinary exception handling.

Check out the package source code for an example.