Go HTTP server testing: ab vs wrk, so much difference

Posted 2020-06-04 09:31

I am trying to see how many requests the Go HTTP server can handle on my machine, so I ran some tests, but the difference between the two tools is so large that I'm confused.

First I tried benchmarking with ab, running this command:

$ ab -n 100000 -c 1000 http://127.0.0.1/

That is 100,000 requests in total, 1,000 of them concurrent.

The result is as follows:

Concurrency Level:      1000
Time taken for tests:   12.055 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      12800000 bytes
HTML transferred:       1100000 bytes
Requests per second:    8295.15 [#/sec] (mean)
Time per request:       120.552 [ms] (mean)
Time per request:       0.121 [ms] (mean, across all concurrent requests)
Transfer rate:          1036.89 [Kbytes/sec] received

8,295 requests per second, which seems reasonable.

But then I ran wrk with this command:

$ wrk -t1 -c1000 -d5s http://127.0.0.1:80/

And I get these results:

Running 5s test @ http://127.0.0.1:80/
  1 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    18.92ms   13.38ms 234.65ms   94.89%
    Req/Sec    27.03k     1.43k   29.73k    63.27%
  136475 requests in 5.10s, 16.66MB read
Requests/sec:  26767.50
Transfer/sec:      3.27MB

26,767 requests per second? I don't understand why there is such a huge difference.

The code under test was the simplest possible Go server:

package main

import (
    "log"
    "net/http"
)

func main() {
    // Reply to every request with a static body.
    http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        w.Write([]byte("Hello World"))
    })

    // ListenAndServe blocks until the server fails; surface that error.
    log.Fatal(http.ListenAndServe(":80", nil))
}

My goal is to see how many requests the Go server can handle as I increase the number of cores, but this is too large a difference before I've even started adding CPU power. Does anyone know how the Go server scales as cores are added? And why is there such a huge difference between ab and wrk?
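
For reference, my plan for the per-core runs is to cap the scheduler with runtime.GOMAXPROCS and re-run the benchmark at each setting. A minimal sketch of that setup, reusing the handler above (starting the cap at 1 is just an assumption about how to step through core counts):

package main

import (
    "log"
    "net/http"
    "runtime"
)

func main() {
    // Cap the Go scheduler at N OS threads; re-run the benchmark with
    // 1, 2, 4, ... to see how throughput scales with cores.
    runtime.GOMAXPROCS(1)

    http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        w.Write([]byte("Hello World"))
    })

    log.Fatal(http.ListenAndServe(":80", nil))
}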

1 Answer
乱世女痞
Answered 2020-06-04 10:03

Firstly: benchmarks are often pretty artificial. Sending back a handful of bytes will net you very different results once you start adding database calls, template rendering, session parsing, etc. (expect an order-of-magnitude difference).
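
To make that concrete, even encoding a small JSON struct per request will benchmark noticeably slower than a static write. A rough sketch of a slightly heavier handler (the payload type and its field are illustrative, not from the question):

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

// payload is an illustrative stand-in for real response data.
type payload struct {
    Message string `json:"message"`
}

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        // Per-request JSON encoding simulates a fraction of the work a
        // real handler does beyond writing a static byte slice.
        w.Header().Set("Content-Type", "application/json")
        if err := json.NewEncoder(w).Encode(payload{Message: "Hello World"}); err != nil {
            log.Println("encode:", err)
        }
    })
    log.Fatal(http.ListenAndServe(":80", nil))
}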

Then tack on local issues - open file/socket limits on your dev machine vs. production, competition between your benchmarking tool (ab/wrk) and your Go server for those resources, the local loopback adapter or OS TCP stacks (and TCP stack tuning), etc. It goes on!
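
If you want to rule out the descriptor limit specifically, you can read it from the process itself. A quick check, assuming a Unix-like system (syscall.Getrlimit is not available on Windows):

package main

import (
    "fmt"
    "syscall"
)

func main() {
    var rl syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
        fmt.Println("getrlimit:", err)
        return
    }
    // At -c 1000 the benchmarker and the server each hold ~1000 sockets,
    // so a default soft limit of 256 or 1024 is the first thing to fail.
    fmt.Printf("open-file limit: soft=%d hard=%d\n", rl.Cur, rl.Max)
}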

In addition:

  • ab is not highly regarded
  • It is HTTP/1.0 only, and as run here (without -k) it does not use keep-alives, while wrk reuses its connections - see the sketch after this list
  • Your other metrics vary wildly - e.g. look at the average latency reported by each tool: ab shows a much higher latency
  • Your ab test also runs for 12s, not the 5s your wrk test does
  • Even 8k req/s is a huge amount of load - that's 28 million requests an hour. Even if that dropped to 3k req/s once you add a DB call, JSON marshalling, etc., you'd still be able to handle a significant amount of load. Don't get too tied up in this kind of benchmark this early.
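
To see how much connection reuse alone matters, you can compare a client that disables keep-alives against one that reuses connections. A rough sketch against the server above; the helper and request count are illustrative, and timings will vary by machine:

package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

// timeRequests issues n sequential GETs with the given client and
// reports the elapsed time, so the keep-alive effect is visible.
func timeRequests(name string, client *http.Client, n int) {
    start := time.Now()
    for i := 0; i < n; i++ {
        resp, err := client.Get("http://127.0.0.1:80/")
        if err != nil {
            fmt.Println(name, "error:", err)
            return
        }
        io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
        resp.Body.Close()
    }
    fmt.Printf("%s: %d requests in %v\n", name, n, time.Since(start))
}

func main() {
    // Like ab without -k: a new TCP connection for every request.
    noKeepAlive := &http.Client{Transport: &http.Transport{DisableKeepAlives: true}}
    // Like wrk: connections are reused across requests.
    keepAlive := &http.Client{Transport: &http.Transport{}}

    timeRequests("no keep-alive", noKeepAlive, 1000)
    timeRequests("keep-alive", keepAlive, 1000)
}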

I have no idea what kind of machine you're on, but my iMac with a 3.5GHz i7-4771 can push upwards of 64k req/s on a single thread responding with w.Write([]byte("Hello World\n")).

Short answer: use wrk and keep in mind that benchmarking tools have a lot of variance.
