I am trying to see how many requests the Go HTTP server can handle on my machine, so I ran some tests, but the difference between the two tools I used is so large that I am confused.
First I benchmarked with ab, running this command:
$ ab -n 100000 -c 1000 http://127.0.0.1/
That is, 100,000 total requests at a concurrency level of 1,000.
The result is as follows:
Concurrency Level: 1000
Time taken for tests: 12.055 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Total transferred: 12800000 bytes
HTML transferred: 1100000 bytes
Requests per second: 8295.15 [#/sec] (mean)
Time per request: 120.552 [ms] (mean)
Time per request: 0.121 [ms] (mean, across all concurrent requests)
Transfer rate: 1036.89 [Kbytes/sec] received
8295 requests per second, which seems reasonable.
But then I ran wrk with this command:
$ wrk -t1 -c1000 -d5s http://127.0.0.1:80/
And got these results:
Running 5s test @ http://127.0.0.1:80/
1 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 18.92ms 13.38ms 234.65ms 94.89%
Req/Sec 27.03k 1.43k 29.73k 63.27%
136475 requests in 5.10s, 16.66MB read
Requests/sec: 26767.50
Transfer/sec: 3.27MB
26767 requests per second? I don't understand why there is such a huge difference.
The server under test was the simplest possible Go server:
package main

import (
	"log"
	"net/http"
)

func main() {
	// Respond to every request with a fixed body.
	http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte("Hello World"))
	})
	// log.Fatal surfaces the error if the server fails to start or stops.
	log.Fatal(http.ListenAndServe(":80", nil))
}
My goal is to see how many requests the Go server can handle as I increase the number of cores, but this is already too large a difference before I even start adding CPU power. Does anyone know how the Go HTTP server scales as cores are added, and why there is such a huge difference between ab and wrk?
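In case it matters, here is a minimal sketch of how I plan to vary the core count for the scaling test. The -cores flag is my own addition for the experiment, not part of the server above; it just calls runtime.GOMAXPROCS to cap how many OS threads may execute Go code simultaneously.

package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
	"runtime"
)

func main() {
	// Hypothetical -cores flag for this experiment: limits how many
	// cores the Go runtime will use (defaults to all available cores).
	cores := flag.Int("cores", runtime.NumCPU(), "number of cores to use")
	flag.Parse()

	runtime.GOMAXPROCS(*cores)
	fmt.Printf("running with GOMAXPROCS=%d\n", *cores)

	// Same trivial handler as the original server.
	http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte("Hello World"))
	})
	log.Fatal(http.ListenAndServe(":80", nil))
}

Alternatively, the same limit can be set without changing the code via the environment variable, e.g. GOMAXPROCS=2 ./server.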