Why does increasing worker_connections in Nginx make performance worse?

Posted 2019-04-11 06:03

I'm converting my application to a Node.js cluster, which I hope will boost its performance.

Currently, I'm deploying the application to two EC2 t2.medium instances, with Nginx as a reverse proxy behind an ELB.

This is my Express cluster application, which is pretty standard from the documentation.

var bodyParser = require('body-parser');
var cors = require('cors');
var cluster = require('cluster');
var debug = require('debug')('expressapp');

if(cluster.isMaster) {
  var numWorkers = require('os').cpus().length;
  debug('Master cluster setting up ' + numWorkers + ' workers');

  for(var i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function(worker) {
    debug('Worker ' + worker.process.pid + ' is online');
  });

  cluster.on('exit', function(worker, code, signal) {
    debug('Worker ' + worker.process.pid + ' died with code: ' + code + ', and signal: ' + signal);
    debug('Starting a new worker');
    cluster.fork();  
  });
} else {
  // Express stuff
}

This is my Nginx configuration.

nginx::worker_processes: "%{::processorcount}"
nginx::worker_connections: '1024'
nginx::keepalive_timeout: '65'

I have 2 CPUs on the Nginx server.
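For reference, those Hiera values render to roughly this nginx.conf (a sketch; the proxy details are omitted, and %{::processorcount} resolves to 2 on this box):

worker_processes 2;

events {
    # Per worker, so about 2 x 1024 = 2048 simultaneous connections in total.
    # Note that as a reverse proxy, nginx holds two connections per request:
    # one to the client and one to the upstream Node.js worker.
    worker_connections 1024;
}

http {
    keepalive_timeout 65;
    # ... proxy_pass to the Node.js cluster ...
}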

This is my performance before the change.

[Gatling results screenshot: before]

I get 1,500 requests/s, which is pretty good. Then I thought I would increase the number of connections on Nginx so it can accept more requests, so I changed this:

nginx::worker_processes: "%{::processorcount}"
nginx::worker_connections: '2048'
nginx::keepalive_timeout: '65'

And this is my performance after the change.

[Gatling results screenshot: after]

I think this is worse than before.

I use Gatling for performance testing, and here's the code.

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class LoadTestSparrowCapture extends Simulation {
  val httpConf = http
    .baseURL("http://ELB")
    .acceptHeader("application/json")
    .doNotTrackHeader("1")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .acceptEncodingHeader("gzip, deflate")
    .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0")

  val headers_10 = Map("Content-Type" -> "application/json")

  val scn = scenario("Load Test")
    .exec(http("request_1")
      .get("/track"))

  setUp(
    scn.inject(
      atOnceUsers(15000)
    ).protocols(httpConf))
}

I deployed this to my Gatling cluster, so I have 3 EC2 instances firing 15,000 requests in 30 s at my application.

The question is: is there anything I can do to increase the performance of my application, or do I just need to add more machines?

The route I'm testing is pretty simple: I receive the request and send it off to RabbitMQ for further processing, so the response of that route is pretty fast.
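For context, the handler looks roughly like this (a minimal sketch, assuming amqplib and a queue named 'track'; error handling and the real payload are omitted):

var express = require('express');
var amqp = require('amqplib');

var app = express();
var channel;

// Open one AMQP connection and channel per worker at startup.
amqp.connect('amqp://localhost').then(function(conn) {
  return conn.createChannel();
}).then(function(ch) {
  channel = ch;
  return channel.assertQueue('track');
});

// The route just enqueues the request and replies immediately,
// which is why the response is fast.
app.get('/track', function(req, res) {
  channel.sendToQueue('track', Buffer.from(JSON.stringify(req.query)));
  res.sendStatus(202);
});

app.listen(3000);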

1 Answer

劳资没心，怎么记你 · answered 2019-04-11 06:59

You've mentioned that you are using AWS with an ELB in front of your EC2 instances. As far as I can see, you are getting 502 and 503 status codes. These can be sent either by the ELB or by your EC2 instances, so make sure that when doing the load test you know where the errors are coming from. You can check this in the AWS console under the ELB's CloudWatch metrics.

Basically, HTTPCode_ELB_5XX means your ELB sent the 50x, while HTTPCode_Backend_5XX means your backend sent it. You can also verify this in the ELB access logs. A better explanation of ELB errors can be found here.
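For example, with the AWS CLI you can pull both metrics and compare them (a sketch; the load balancer name and time window are placeholders):

aws cloudwatch get-metric-statistics \
    --namespace AWS/ELB \
    --metric-name HTTPCode_ELB_5XX \
    --dimensions Name=LoadBalancerName,Value=my-elb \
    --start-time 2019-04-11T06:00:00Z \
    --end-time 2019-04-11T07:00:00Z \
    --period 60 \
    --statistics Sum

Swap HTTPCode_ELB_5XX for HTTPCode_Backend_5XX to see which side produced the errors.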

To load-test on AWS you should definitely read this. The point is that an ELB is just another set of machines, which needs to scale as your load increases. The default scaling strategy is (quoted from the section "Ramping Up Testing"):

Once you have a testing tool in place, you will need to define the growth in the load. We recommend that you increase the load at a rate of no more than 50 percent every five minutes.

That means if you start at some number of concurrent users, let's say 1,000, by default you should only increase up to 1,500 within 5 minutes. This guarantees that the ELB scales with the load on your servers. The exact numbers may vary and you have to test them yourself. The last time I tested, it sustained a load of 1,200 req/s without issue, and then I started to receive 50x errors. You can test this easily by running a ramp-up scenario from X to Y users from a single client and waiting for the 50x responses.
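In Gatling terms, such a ramp could look roughly like this (a sketch in the same Gatling 2.x DSL as the test above; the start count and increment are placeholders to tune):

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class RampUpTest extends Simulation {
  val httpConf = http
    .baseURL("http://ELB")
    .acceptHeader("application/json")

  val scn = scenario("Ramp-Up Test")
    .exec(http("request_1").get("/track"))

  setUp(
    scn.inject(
      atOnceUsers(1000),               // start at 1,000 concurrent users...
      rampUsers(500) over (5 minutes)  // ...then add 50% more over 5 minutes
    ).protocols(httpConf))
}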

The next very important thing (from the section "DNS Resolution") is:

If clients do not re-resolve the DNS at least once per minute, then the new resources Elastic Load Balancing adds to DNS will not be used by clients.

In short, it means that you have to guarantee that the DNS TTL is respected, or that your clients re-resolve and rotate through the DNS IPs they received from the lookup, so that load is distributed across the ELB nodes in a round-robin fashion. If they don't (e.g. when testing from only one client, which is not your case), you can skew the results by directing all the traffic at a single ELB instance and overloading it, which means the ELB will not scale at all.
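On the JVM, for example, you can cap the DNS cache and resolve every ELB IP so a long-running client can rotate across them (a sketch; the hostname is a placeholder, and the TTL property must be set before the first lookup):

import java.net.InetAddress
import java.security.Security

object ResolveElb {
  def main(args: Array[String]): Unit = {
    // Cache successful lookups for at most 60 seconds.
    Security.setProperty("networkaddress.cache.ttl", "60")
    // All A records the ELB currently publishes; rotate requests across them.
    val ips = InetAddress.getAllByName("my-elb.example.com")
    ips.foreach(ip => println(ip.getHostAddress))
  }
}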

Hope this helps.
