I'm developing a Facebook canvas application and I want to load-test it. I'm aware of Facebook's restriction on automated testing, so I simulated the Graph API calls by creating a fake web application served under nginx and altering my /etc/hosts to point graph.facebook.com to 127.0.0.1.
I'm using JMeter to load-test the application and the simulation works fine. Now I want to simulate slow Graph API responses and see how they affect my application. How can I configure nginx so that it inserts a delay into each request sent to the simulated graph.facebook.com application?
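For reference, this is the kind of /etc/hosts entry I use to point the Graph API hostname at the local simulator:

127.0.0.1    graph.facebook.com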
You can slow down localhost (network) traffic by adding an artificial delay with tc/netem.

Use the ifconfig command to see the network devices: on localhost it is usually lo, and on a LAN it is typically eth0.

To add a delay, use this command (it adds a 1000 ms delay on the lo device; the tc commands need root privileges):

tc qdisc add dev lo root netem delay 1000ms

To change the delay, use this one:

tc qdisc change dev lo root netem delay 1ms

And to remove the delay:

tc qdisc del dev lo root netem
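Note that a netem delay attached directly to the root qdisc of lo delays every packet on the loopback interface, not only the traffic to the simulated graph.facebook.com. If you want the delay to hit only the fake Graph API (the port 80 below is an assumption about where your nginx simulator listens), one common netem pattern is to attach the delay to one band of a prio qdisc and steer just that traffic into it with a u32 filter. A minimal sketch:

# Sketch only; assumes the simulated Graph API listens on port 80 on lo.
# prio creates three bands (1:1, 1:2, 1:3); only band 1:3 gets the delay.
tc qdisc add dev lo root handle 1: prio
tc qdisc add dev lo parent 1:3 handle 30: netem delay 1000ms
# Steer packets whose destination port is 80 into band 1:3.
tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip dport 80 0xffff flowid 1:3
# Undo everything with: tc qdisc del dev lo root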
My earlier answer works, but it is better suited to a case where all requests need to be slowed down. I've since had to come up with a solution that would allow me to turn the rate limit on only on a case-by-case basis, and came up with the following configuration. Make sure to read the entire answer before you use this, because there are important nuances to know.
location / {
    if (-f somewhere/sensible/LIMIT) {
        echo_sleep 1;
        # Yes, we need this here too.
        echo_exec /proxy$request_uri;
    }
    echo_exec /proxy$request_uri;
}

location /proxy/ {
    internal;
    # Ultimately, all this goes to a Django server.
    proxy_pass http://django/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $remote_addr;
}
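The proxy_pass line above refers to a backend named django; in a typical setup that name would be defined by an upstream block elsewhere in the configuration. A rough sketch, where the backend address is just an assumption:

upstream django {
    # Assumed address of the Django application server.
    server 127.0.0.1:8000;
}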
Important note: the presence or absence of forward slashes in the various paths makes a difference. For instance, proxy_pass http://django, without a trailing slash, does not do the same thing as the line in the code above.
The principle of operation is simple. If the file somewhere/sensible/LIMIT exists, then requests that match location / are paused for one second before moving on. So in my test suite, when I want a network slowdown I create the file, and when I want to remove the slowdown I remove it. (And I have cleanup code that removes it between tests.) In theory I'd much prefer using variables for this rather than a file, but the problem is that variables are reinitialized with each request. So we cannot have one location block that sets a variable to turn the limit on and another to turn it off. (That's the first thing I tried, and it failed due to the lifetime of variables.) It would probably be possible to use the Perl module or Lua to persist variables, or to fiddle with cookies, but I've decided not to go down those routes.
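In practice, toggling the slowdown from the test suite is just a matter of creating and removing that file (the path below is the same placeholder used in the configuration above):

# Enable the 1-second delay for subsequent requests.
touch somewhere/sensible/LIMIT
# ... run the tests that need a slow network ...
# Disable the delay again (my cleanup code does this between tests).
rm -f somewhere/sensible/LIMIT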
Important notes:

It is not a good idea to mix directives from the echo module (like echo_sleep and echo_exec) with the stock nginx directives that produce a response. I initially had echo_sleep together with proxy_pass and got bad results. That's why the location /proxy/ block segregates the stock directives from the echo stuff. (See this issue for a similar conflict that was resolved by splitting a block.)

The two echo_exec directives, inside and outside the if, are necessary due to how if works in nginx.

The internal directive prevents clients from directly requesting /proxy/... URLs.
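Also keep in mind that echo_sleep and echo_exec come from the third-party echo-nginx-module, not from stock nginx. If your build has it as a dynamic module, it has to be loaded at the top of nginx.conf; the module path below is an assumption that depends on how the module was built and installed:

# Assumed module path; adjust to wherever your build installs it.
load_module modules/ngx_http_echo_module.so;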
I've modified an nginx config to use limit_req_zone and limit_req to introduce delays. The following reduces the rate of service to 20 requests per second (rate=20r/s). I've set burst=1000 so that my application would not get 503 responses.
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=20r/s;
    [...]
    server {
        [...]
        location / {
            limit_req zone=one burst=1000;
            [...]
        }
    }
}
The documentation for these directives is in the ngx_http_limit_req_module reference. I do not believe there is a way to specify a uniform per-request delay with this method: requests that exceed the rate are queued within the burst and released at the configured rate, so the delay each request experiences depends on how many requests are already queued rather than being a fixed amount.
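As a rough sanity check of the throttling (assuming ApacheBench is available and the simulator answers on the redirected hostname), 100 requests at rate=20r/s should take on the order of 100 / 20 = 5 seconds:

# Hypothetical smoke test: 100 requests, 10 concurrent, against the fake Graph API.
ab -n 100 -c 10 http://graph.facebook.com/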