What is the right way to deal with MongoDB connections?


Question:

I'm trying Node.js with MongoDB (2.2.2) using the native Node.js driver from 10gen.

At first everything went well, but when I got to the concurrency benchmarking part, a lot of errors occurred. Frequent connect/close cycles with 1000 concurrent clients can make MongoDB reject any further requests, with errors like:

Error: failed to connect to [localhost:27017]

Error: Could not locate any valid servers in initial seed list

Error: no primary server found in set

Also, if a lot of clients shut down without an explicit close, it takes MongoDB minutes to detect and close those connections, which causes similar connection problems. (I'm using /var/log/mongodb/mongodb.log to check the connection status.)

I have tried a lot of things. According to the manual, MongoDB has no connection limit, but the poolSize option seems to have no effect for me.

As I have only worked with the node-mongodb-native module, I'm not sure what is ultimately causing the problem. How do other languages and drivers perform here?

PS: Currently, a self-maintained pool is the only solution I have figured out, but it does not solve the problem with replica sets. In my tests a replica set seems to handle far fewer connections than a standalone MongoDB, but I have no idea why.

Concurrency test code:

var MongoClient = require('mongodb').MongoClient;

var uri = "mongodb://192.168.0.123:27017,192.168.0.124:27017/test";

// open a brand-new MongoClient (each with its own pool) on every iteration
for (var i = 0; i < 1000; i++) {
    MongoClient.connect(uri, {
        server: {
            socketOptions: {
                connectTimeoutMS: 3000
            }
        },
    }, function (err, db) {
        if (err) {
            console.log('error: ', err);
        } else {
            var col = db.collection('test');
            col.insert({abc:1}, function (err, result) {
                if (err) {
                    console.log('insert error: ', err);
                } else {
                    console.log('success: ', result);
                }
                db.close(); // close the client right after the single insert
            });
        }
    })
}

Generic-pool solution:

var MongoClient = require('mongodb').MongoClient;
var poolModule = require('generic-pool');

var uri = "mongodb://localhost/test";

var read_pool = poolModule.Pool({
    name     : 'redis_offer_payment_reader',
    create   : function(callback) {
        MongoClient.connect(uri, {}, function (err, db) {
            if (err) {
                callback(err);
            } else {
                callback(null, db);
            }
        });
    },
    destroy  : function(client) { client.close(); },
    max      : 400,
    // optional. if you set this, make sure to drain() (see step 3)
    min      : 200, 
    // specifies how long a resource can stay idle in pool before being removed
    idleTimeoutMillis : 30000,
    // if true, logs via console.log - can also be a function
    log : false 
});


var size = [];
for (var i = 0; i < 100000; i++) {
    size.push(i);
}

size.forEach(function () {
    read_pool.acquire(function (err, db) {
        if (err) {
            console.log('error: ', err);
        } else {
            var col = db.collection('test');
            col.insert({abc:1}, function (err, result) {
                if (err) {
                    console.log('insert error: ', err);
                } else {
                    //console.log('success: ', result);
                }
                read_pool.release(db); // hand the connection back to the pool
            });
        }
    });
});
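
As the comment on the min option notes, when a minimum is set the pool should be drained on shutdown so the idle connections are actually closed. A minimal sketch, assuming the generic-pool 2.x API used above:

// once all work has been released, shut the pool down cleanly
read_pool.drain(function () {
    read_pool.destroyAllNow();
});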

Answer 1:

Since Node.js is single-threaded you shouldn't be opening and closing the connection on each request (like you would do in other multi-threaded environments).

This is a quote from the person who wrote the MongoDB Node.js client module:

“You open do MongoClient.connect once when your app boots up and reuse the db object. It's not a singleton connection pool each .connect creates a new connection pool. So open it once an[d] reuse across all requests.” - christkv https://groups.google.com/forum/#!msg/node-mongodb-native/mSGnnuG8C1o/Hiaqvdu1bWoJ
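
For illustration, here is a minimal sketch of that pattern using the same 1.x driver API as the question; the HTTP server and the poolSize value are just example choices:

var MongoClient = require('mongodb').MongoClient;
var http = require('http');

var uri = "mongodb://localhost:27017/test";

// connect once when the app boots...
MongoClient.connect(uri, { server: { poolSize: 20 } }, function (err, db) {
    if (err) throw err;

    // ...then reuse the same db object (and its internal pool) for every request
    http.createServer(function (req, res) {
        db.collection('test').insert({ abc: 1 }, function (err, result) {
            if (err) {
                res.writeHead(500);
                return res.end('insert error');
            }
            res.end('ok');
        });
    }).listen(8080);
});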



Answer 2:

After looking into Hector's advice, I found that MongoDB's connections behave quite differently from some other databases I have used. The main difference is that in the native Node.js driver, each MongoClient you open has its own connection pool, whose size is defined by

server:{poolSize: n}

So opening 5 MongoClient connections with poolSize: 100 means a total of 5 * 100 = 500 connections to the target MongoDB URI. In that case, frequently opening and closing MongoClient connections is definitely a huge burden on the host and eventually causes connection problems. That's why I got into so much trouble in the first place.
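
For example, this is roughly how the per-client pool is configured with the 1.x driver; each MongoClient.connect call like this builds its own pool, so five of them with poolSize: 100 add up to 500 server-side connections (the URI and numbers are only illustrative):

var MongoClient = require('mongodb').MongoClient;

// one MongoClient = one pool of up to `poolSize` sockets to the server
MongoClient.connect("mongodb://localhost:27017/test", {
    server: { poolSize: 100 }
}, function (err, db) {
    if (err) return console.log('connect error: ', err);
    // reuse this db object; connecting again elsewhere would create another 100-socket pool
});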

But since my code was already written that way, I use a connection pool that stores a single MongoClient per distinct URI, plus a simple parallel limiter of the same size as the poolSize, to avoid connection errors at peak load.

Here is my code:

/*npm modules start*/
var MongoClient = require('mongodb').MongoClient;
/*npm modules end*/

// simple resource limiter module, used to cap how many callbacks run in parallel
var simple_limit = require('simple_limit').simple_limit;

// one uri, one connection
var client_pool = {};

var default_options = {
    server: {
        auto_reconnect: true,
        poolSize: 200,
        socketOptions: {
            connectTimeoutMS: 1000
        }
    }
};

var mongodb_pool = function (uri, options) {
    this.uri = uri;
    options = options || default_options;
    this.options = options;
    this.poolSize = 10; // default poolSize 10, used below as the parallel limit

    if (undefined !== options.server && undefined !== options.server.poolSize) {
        this.poolSize = options.server.poolSize; // if the caller's options define poolSize, use that
    }
};

// cb(err, db)
mongodb_pool.prototype.open = function (cb) {
    var self = this;
    if (undefined === client_pool[this.uri]) {
        console.log('new');

        // init pool entry: locked while connecting, with the current callback queued
        client_pool[this.uri] = {
            lock: true,
            wait: [cb]
        };

        // open mongodb first
        MongoClient.connect(this.uri, this.options, function (err, db) {
            if (err) {
                // fail every waiting caller and drop the entry so a later open() can retry
                client_pool[self.uri].wait.forEach(function (callback) {
                    callback(err);
                });
                delete client_pool[self.uri];
            } else {
                client_pool[self.uri].limiter = new simple_limit(self.poolSize);
                client_pool[self.uri].db = db;

                // flush the callbacks queued while the connection was being opened
                client_pool[self.uri].wait.forEach(function (callback) {
                    client_pool[self.uri].limiter.acquire(function () {
                        callback(null, client_pool[self.uri].db);
                    });
                });

                client_pool[self.uri].lock = false;
            }
        });
    } else if (true === client_pool[this.uri].lock) {
        // while one is connecting to the target uri, just wait
        client_pool[this.uri].wait.push(cb);
    } else {
        // connection already open: just wait for a free limiter slot
        client_pool[this.uri].limiter.acquire(function () {
            cb(null, client_pool[self.uri].db);
        });
    }
};

// "close" only releases one limiter slot; the shared MongoClient itself stays open
mongodb_pool.prototype.close = function () {
    client_pool[this.uri].limiter.release();
};

exports.mongodb_pool = mongodb_pool;
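
A quick usage sketch, assuming the module above is saved as mongodb_pool.js next to the caller (the file name and URI are my own choices):

var mongodb_pool = require('./mongodb_pool').mongodb_pool;

var pool = new mongodb_pool("mongodb://localhost:27017/test");

pool.open(function (err, db) {
    if (err) {
        return console.log('open error: ', err);
    }
    db.collection('test').insert({ abc: 1 }, function (err, result) {
        if (err) {
            console.log('insert error: ', err);
        }
        // releases one limiter slot; the shared MongoClient stays open
        pool.close();
    });
});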