I am currently ALWAYS getting a 502 on a query my users are running, which usually returns 872 rows and takes 2.07 seconds to run in MySQL. It is, however, returning a LOT of information (each row contains a lot of data). Any ideas?
Running the Django (tastypie REST API), Nginx and uWSGI stack.
Server Config with NGINX
# the upstream component nginx needs to connect to
upstream django {
    server unix:///srv/www/poka/app/poka/nginx/poka.sock; # for a file socket
}

# configuration of the server
server {
    # the port your site will be served on
    listen 443;
    # the domain name it will serve for
    server_name xxxx; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 750M; # adjust to taste

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /srv/www/poka/app/poka/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}
uWSGI config
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 2
# the socket (use the full path to be safe)
socket = /srv/www/poka/app/poka/nginx/poka.sock
# ... with appropriate permissions - may be needed
chmod-socket = 666
# clear environment on exit
vacuum = true
pidfile = /tmp/project-master.pid # create a pidfile
harakiri = 120 # respawn processes taking more than 120 seconds
max-requests = 5000 # respawn processes after serving 5000 requests
daemonize = /var/log/uwsgi/poka.log # background the process & log
log-maxsize = 10000000
# http://uwsgi-docs.readthedocs.org/en/latest/Options.html#post-buffering
post-buffering = 1
logto = /var/log/uwsgi/poka.log # log to this file
This is unlikely to be an nginx config issue.
It's almost certainly the backend actually crashing (or just terminating the connection) rather than returning a malformed response. In other words, the error message is telling you what the problem is, but you're looking in the wrong place to solve it.
You don't give enough information to allow us to figure out the exact issue, but if I had to guess:
which usually returns 872 rows and takes 2.07 seconds to run in MySQL. It is, however, returning a LOT of information.
It's either timing out somewhere or running out of memory.
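If the payload is the problem, one mitigation is to page the results instead of returning all 872 rows in a single response. A minimal sketch, assuming a tastypie ModelResource; ItemResource, Item, and the module paths are placeholder names, not from the question:

# myapp/api.py -- hedged sketch; resource and model names are hypothetical
from tastypie.resources import ModelResource
from myapp.models import Item

class ItemResource(ModelResource):
    class Meta:
        queryset = Item.objects.all()
        resource_name = 'item'
        limit = 100      # rows per page returned by default
        max_limit = 500  # hard cap even if the client asks for more

Clients can then walk the pages with the limit/offset query parameters that tastypie exposes, which keeps each response small enough to stay under worker timeouts.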
I had the same issue; what fixed it for me was adding my domain to settings.py, e.g.:
ALLOWED_HOSTS = ['.mydomain.com', '127.0.0.1', 'localhost']
By the same issue, I mean I couldn't even load the page: nginx returned a 502 straight away, without serving any page from which I could have made the application crash.
And the nginx log contained:
Error: upstream prematurely closed connection while reading response header from upstream
In your @django location block, you can try adding proxy read and connect timeout directives, e.g.:
location @django {
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_redirect off;

    # proxy header definitions
    ...
    proxy_pass http://django;
}
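Note that the proxy_* timeouts only apply when nginx reaches the backend via proxy_pass. The config in the question uses uwsgi_pass, whose equivalents are the uwsgi_* directives; a hedged sketch adapted to the socket setup above (300 is an illustrative value, not a recommendation):

location / {
    uwsgi_pass django;
    include /srv/www/poka/app/poka/nginx/uwsgi_params;
    uwsgi_connect_timeout 300; # time allowed to establish a connection to uWSGI
    uwsgi_read_timeout 300;    # time allowed to wait for a response from uWSGI
}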
Sometimes it may be a permissions problem. Check the permissions on the project directory.
It might be a uWSGI configuration issue rather than an Nginx one. I see you have processes = 2 and harakiri = 120; have you tried changing those, and the other fields, one at a time?
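For example, a few uWSGI options worth adjusting one at a time while watching /var/log/uwsgi/poka.log; the values below are illustrative, not recommendations:

processes = 4        # more workers so one slow request doesn't starve the rest
harakiri = 300       # give long-running requests more time before being killed
buffer-size = 32768  # raise the 4096-byte default if requests carry large headers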
I had the same issue, but it wasn't my NGINX configuration; it was my uWSGI processes causing timeout errors when I POSTed JSON from the client side to the server. I had processes set to 5; changing it to 1 solved the issue, since my application only needed one process running at a time.
Here is the working uWSGI autoboot ini file that solved the timeout and thus the 502 Bad Gateway (upstream prematurely closed connection) issue.
autoboot.ini
[uwsgi]
socket = /tmp/app.sock
master = true
chmod-socket = 660
module = app.wsgi
chdir = /home/app
close-on-exec = true # set close-on-exec on the sockets (needed when spawning processes from requests)
processes = 1
threads = 2
vacuum = true
die-on-term = true
Hope it helps.