I have a collection with 9 million records. I am currently using the following script to update the entire collection:
simple_update.js
db.mydata.find().forEach(function (data) {
    db.mydata.update({ _id: data._id }, { $set: { pid: (2571 - data.Y + (data.X * 2572)) } });
});
This is run from the command line as follows:
mongo my_test simple_update.js
So all I am doing is adding a new field pid based upon a simple calculation.
Is there a faster way? This takes a significant amount of time.
There are two things that you can do.
- Send an update with the 'multi' flag set to true.
- Store the function server-side and try using server-side code execution.
That link also contains the following advice:
This is a good technique for performing batch administrative work. Run mongo on the server, connecting via the localhost interface. The connection is then very fast and low latency. This is friendlier than db.eval() as db.eval() blocks other operations.
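For example, running the question's script directly on the database host over the loopback interface (the same invocation as above, just executed on the server):

mongo --host 127.0.0.1 my_test simple_update.js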
This is probably the fastest you'll get. You have to realize that issuing 9M updates on a single server is going to be a heavy operation. Even if you could get 3k updates/second, you're still looking at nearly an hour of runtime.
And that's not really a "mongo problem", that's going to be a hardware limitation.
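Note that a plain multi-update cannot compute pid from each document's own X and Y, since a plain $set cannot reference other fields. On MongoDB 4.2+ you can do the whole thing in one command with an aggregation-pipeline update; a minimal sketch:

// MongoDB 4.2+: a pipeline update can read each document's own fields,
// so pid = 2571 - Y + (X * 2572) becomes:
db.mydata.updateMany(
    {},
    [ { $set: { pid: { $add: [ { $subtract: [ 2571, "$Y" ] },
                               { $multiply: [ "$X", 2572 ] } ] } } } ]
);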
I am using the db.collection.update method:
// db.collection.update( criteria, objNew, upsert, multi ) // --> for reference
db.collection.update( { "_id" : { $exists : true } }, objNew, upsert, true);
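For instance, a concrete call with the placeholders filled in (the migrated field is just illustrative):

// Positional form: update( criteria, objNew, upsert, multi )
db.mydata.update(
    { "_id" : { $exists : true } },   // criteria: matches every document
    { $set : { migrated : true } },   // objNew: the modification to apply
    false,                            // upsert: do not insert if nothing matches
    true                              // multi: apply to all matching documents
);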
I wouldn't recommend using {multi: true} for a larger data set, because it uses a lot of CPU and is less configurable.
As a MongoDB enthusiast, I hate it when people say Mongo is slow, so I found a better way using bulk operations.
Bulk operations are really helpful for scheduled tasks. Say you have to delete data older than 6 months daily: use a bulk operation. It's fast and won't slow the server down. CPU and memory usage is barely noticeable even when you insert, delete, or update a billion documents, whereas {multi: true} slows the server down when you are dealing with a million+ documents.
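For the scheduled-delete case, a minimal sketch (createdAt is a hypothetical date field here):

// Remove everything older than roughly 6 months in one bulk operation.
var cutoff = new Date(Date.now() - 1000 * 60 * 60 * 24 * 182);
var bulk = db.myCol.initializeUnorderedBulkOp();
bulk.find({ createdAt: { $lt: cutoff } }).remove();
bulk.execute();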
See a sample below. It's a js shell script; you can run it on the server as a node program as well (use the npm module shelljs or similar to achieve this).
Update MongoDB to 3.2+.
The old way of doing the update:

let counter = 0;
db.myCol.find({}).sort({ $natural: 1 }).limit(1000000).forEach(function (document) {
    counter++;
    document.test_value = "just testing" + counter;
    db.myCol.save(document);
});
It took 310-315 seconds when I tried. That's more than 5 minutes to update a million documents.
My collection includes 100 million+ documents, so speed may differ for others.
The same using a bulk operation:
let counter = 0;
// Magic number - depends on your hardware and document size; my document size is around 1.5kb-2kb.
// Performance drops when this limit is outside the 1500-2500 range.
// Try different values and find the fastest bulk limit for your document size, or take an average.
let limitNo = 2222;
let bulk = db.myCol.initializeUnorderedBulkOp();
let noOfDocsToProcess = 1000000;
db.myCol.find({}).sort({ $natural: 1 }).limit(noOfDocsToProcess).forEach(function (document) {
    counter++;
    noOfDocsToProcess--;
    limitNo--;
    // Queue the update; nothing is sent to the server yet.
    bulk.find({ _id: document._id }).update({ $set: { test_value: "just testing .. " + counter } });
    // Flush the batch when it is full, or when the last document has been queued.
    if (limitNo === 0 || noOfDocsToProcess === 0) {
        bulk.execute();
        bulk = db.myCol.initializeUnorderedBulkOp();
        limitNo = 2222;
    }
});
The best time was 8972 ms, so on average it took only about 10 seconds to update a million documents, roughly 30 times faster than the old way.
Put the code in a .js file and execute it as a mongo shell script.
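On newer shells the same batching can also be written with db.collection.bulkWrite(), which supersedes the Bulk API in current drivers; a sketch of the same loop:

let ops = [];
db.myCol.find({}).limit(1000000).forEach(function (document) {
    ops.push({
        updateOne: {
            filter: { _id: document._id },
            update: { $set: { test_value: "just testing .. " } }
        }
    });
    if (ops.length === 2000) {                       // flush in batches
        db.myCol.bulkWrite(ops, { ordered: false });
        ops = [];
    }
});
if (ops.length > 0) {                                // flush the remainder
    db.myCol.bulkWrite(ops, { ordered: false });
}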
If someone finds a better way, please update this answer. Let's use Mongo in a faster way.
Not sure if it will be any faster, but you could do a multi-update. Just issue an update whose criteria matches every document, for example { _id: { $exists: true } } (a numeric comparison like _id > 0 won't match ObjectId values), set the 'multi' flag to true, and it should do the same without having to iterate through the entire collection.
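A sketch of that multi-update (the processed field is just illustrative; as noted above, a plain $set cannot compute the question's per-document pid, which needs either the loop or a 4.2+ pipeline update):

db.mydata.update(
    { _id : { $exists : true } },    // matches every document
    { $set : { processed : true } },
    { multi : true }
);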
Check this out:
MongoDB - Server Side Code Execution