I am trying to run a webservice API with ServiceStack under nginx and fastcgi-mono-server.
The server starts fine and the API is up and running. I can see the response times in the browser through the ServiceStack profiler, and they are under 10 ms.
But as soon as I run a small load test with "siege" (only 500 requests over 10 connections), I start getting 502 Bad Gateway errors, and to recover I have to restart the fastcgi-mono-server.
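For reference, a siege invocation along these lines reproduces the problem (the exact flags and URL are illustrative, not copied from my run; 10 concurrent connections times 50 repetitions gives 500 requests):

siege -c 10 -r 50 http://local-api.acme.com/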
The nginx server is fine. The fastcgi-mono-server is the one that stops responding after this small load.
I've tried using both tcp and unix sockets (I am aware of a permissions problem with the unix socket, but I already fixed that).
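For completeness, these are the two socket variants, sketched with an illustrative TCP port and a blunt permission fix (adjust to your own user/group setup):

# TCP socket variant (port 9000 is just an example)
sudo fastcgi-mono-server4 /applications=local-api.acme.com:/:/Users/admin/dev/acme/Acme.Api/ /socket=tcp:127.0.0.1:9000
# matching nginx directive: fastcgi_pass 127.0.0.1:9000;

# Unix socket variant: make the socket writable by the nginx worker user
sudo chmod 666 /tmp/fastcgi.socket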
Here are my configurations:
server {
    listen 80;
    listen local-api.acme.com:80;
    server_name local-api.acme.com;

    location / {
        root /Users/admin/dev/acme/Acme.Api/;
        index index.html index.htm default.aspx Default.aspx;
        fastcgi_index Default.aspx;
        fastcgi_pass unix:/tmp/fastcgi.socket;
        include /usr/local/etc/nginx/fastcgi_params;
    }
}
To start the fastcgi-mono-server:
sudo fastcgi-mono-server4 /applications=local-api.acme.com:/:/Users/admin/dev/acme/Acme.Api/ /socket=unix:/tmp/fastcgi.socket /multiplex=True /verbose=True /printlog=True
EDIT:
I forgot to mention an important detail: I am running this on Mac OS X.
I also tested all the possible web server configurations for Mono: console application, Apache mod_mono, and nginx with both the fastcgi and proxy_pass modules. All presented the same problem of crashing after a few requests under Mono 3.2.3 on Mac OS X.
I was able to test the same configuration on a Linux machine and didn't have any problems there.
So it seems it is a Mono/ASP.NET issue when running on Mac OS X.
EDIT:
I do see in the original question that there were no problems running under Linux; however, I was also facing difficulties on Linux under "high load" scenarios (i.e. 50+ concurrent requests), so this might apply to OS X as well...
I dug a little deeper into this problem and found a solution for my setup - I'm no longer receiving 502 Bad Gateway errors when load testing my simple hello world application. I tested everything on Ubuntu 13.10 with a fresh compile of Mono 3.2.3 installed in /opt/mono.
When you start fastcgi-mono-server4 with "/verbose=True /printlog=True", you will notice the following output:
Root directory: /some/path/you/defined
Parsed unix:/tmp/nginx-1.sockets as URI unix:/tmp/nginx-1.sockets
Listening on file /tmp/nginx-1.sockets with default permissions
Max connections: 1024
Max requests: 1024
The important lines are "Max connections" and "Max requests". These basically tell how many active TCP connections and requests the mono-fastcgi server will be able to handle - in this case 1024.
My NGINX configuration read:
worker_processes 4;

events {
    worker_connections 1024;
}
So I have 4 workers, each of which can have 1024 connections. Thus NGINX happily accepts 4096 concurrent connections, which are then forwarded to mono-fastcgi (which is only willing to handle 1024 connections). Therefore mono-fastcgi is "protecting itself" and stops serving requests. There are two solutions to this:
- Lower the number of connections that NGINX will accept
- Increase the size of your fastcgi upstream pool
Option 1 is trivially solved by changing the NGINX configuration to read something like:
worker_processes 4; # <-- or 1 here

events {
    worker_connections 256; # <--- if 1 above, then 1024 here
}
However, this could very likely mean that you're not able to max out the resources on your machine.
The solution to option 2 is a bit trickier. First, fastcgi-mono-server4 must be started multiple times. For this I created the following script (inside the website that should be started):
#!/bin/bash
# Start one fastcgi-mono-server4 worker for the current directory on the given socket
function startFastcgi {
    /opt/mono/bin/fastcgi-mono-server4 /loglevels=debug /printlog=true /multiplex=false /applications=/:`pwd` /socket=$1 &
}

startFastcgi 'unix:/tmp/nginx-0.sockets'
startFastcgi 'unix:/tmp/nginx-1.sockets'
startFastcgi 'unix:/tmp/nginx-2.sockets'
startFastcgi 'unix:/tmp/nginx-3.sockets'

# make the sockets accessible to the nginx worker user
chmod 777 /tmp/nginx-*
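A quick sanity check that all four workers actually came up and created their sockets (this check is mine, not part of the original setup):

ls -l /tmp/nginx-*.sockets            # four socket files should exist
ps aux | grep [f]astcgi-mono-server4  # four worker processes should be running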
This starts 4 mono-fastcgi workers, each of which can accept 1024 connections. NGINX should then be configured something like this:
upstream servercom {
    server unix:/tmp/nginx-0.sockets;
    server unix:/tmp/nginx-1.sockets;
    server unix:/tmp/nginx-2.sockets;
    server unix:/tmp/nginx-3.sockets;
}

server {
    listen 80;

    location / {
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_pass servercom;
        include fastcgi_params;
    }
}
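After changing the configuration, validate and reload nginx (standard nginx commands):

sudo nginx -t
sudo nginx -s reload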
This configures NGINX with a pool of 4 "upstream workers", which it uses in a round-robin fashion. Now, when I hammer my server with Boom at a concurrency of 200 for 1 minute, everything holds up (i.e. no 502s at all).
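The exact Boom invocation isn't shown here; an equivalent check with siege (the tool used in the question) at the same concurrency would look roughly like this:

siege -c 200 -t 1M http://localhost/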
I hope you can somehow apply this to your code and make stuff work :)
P.S:
You can download my Hello World ServiceStack code that I used to test here.
And you can download my full NGINX.config here.
Some paths need to be adjusted, but it should serve as a good base.