I've been banging my head on this issue for days and have finally hit a brick wall trying to get my stack to run.
I've been looking at some other SO articles like this one:
nginx - uWSGI HTTP + websocket config
They seem to describe a similar issue to the one I'm encountering, but the solution there does not work for me.
Basically, I keep getting the nginx 502 Bad Gateway screen whenever I start up my uWSGI processes. I have two separate uWSGI processes running, as per the instructions in the documentation.
When I run the websocket uWSGI instance, I get the following:
*** running gevent loop engine [addr:0x487690] ***
[2015-05-27 00:45:34,119 wsgi_server] DEBUG: Subscribed to channels: subscribe-broadcast, publish-broadcast
which tells me that that uWSGI instance is running okay. Then I run my next uWSGI process, and there are no error logs there either...
When I navigate to the page in the browser, it hangs for a few seconds before I get the 502 Bad Gateway screen.
The nginx error log shows:
2015/05/26 22:46:08 [error] 18044#0: *3855 upstream prematurely closed connection while reading response header from upstream, client: 192.168.59.3, server: , request: "GET /chat/ HTTP/1.1", upstream: "uwsgi://unix:/opt/django/django.sock:", host: "192.168.59.103:32768"
This is the only error logged when I try to access the page in the browser.
Any ideas?
Below are some of my config files:
nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/django.conf;
}
I have the following django.conf file, which is included by nginx.conf:
upstream django {
    server unix:/opt/django/django.sock;
}

server {
    listen 80 default_server;
    charset utf-8;
    client_max_body_size 20M;
    sendfile on;
    keepalive_timeout 0;
    large_client_header_buffers 8 32k;

    location /media {
        alias /opt/django/app/media/media;
    }

    location /static {
        alias /opt/django/app/static;
    }

    location / {
        include /opt/django/uwsgi_params;
        uwsgi_pass django;
    }

    location /ws/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://unix:/opt/django/app.sock;
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
    }
}
And here are the two files responsible for my uWSGI processes (the commands I use to start them are shown after the second file):
runserver_uwsgi.ini:
[uwsgi]
ini = :runserver
[default]
userhome = /opt/django
chdir = %dapp/
master = true
module = chatserver.wsgi:application
no-orphans = true
threads = 1
env = DJANGO_SETTINGS_MODULE=myapp.settings
vacuum = true
[runserver]
ini = :default
socket = /opt/django/app.sock
module = wsgi_django
buffer-size = 32768
processes = 4
chmod-socket = 666
and wsserver_uwsgi.ini:
[uwsgi]
ini = :wsserver
[default]
userhome = /opt/django
chdir = %dapp/
master = true
module = chatserver.wsgi:application
no-orphans = true
threads = 1
env = DJANGO_SETTINGS_MODULE=chatserver.settings
vacuum = true
[wsserver]
ini = :default
http-socket = /opt/django/django.sock
module = wsgi_websocket
http-websockets = true
processes = 2
gevent = 1000
chmod-socket = 666
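For reference, I start the two instances with something like the following (assuming both ini files live in /opt/django, so that %d resolves to that directory):

# start the Django WSGI instance (serves normal HTTP requests on app.sock)
uwsgi --ini runserver_uwsgi.ini

# start the websocket instance (gevent loop, listening on django.sock)
uwsgi --ini wsserver_uwsgi.ini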
I had the same issue, but it wasn't my nginx configuration; it was my uWSGI processes causing timeout errors when I posted JSON from the client side to the server. I had processes set to 5, and changing it to 1 solved the issue. For my application, I only needed one process running at a time, since my AWS instance didn't need to be loaded up with multiple processes.
Here is the working uWSGI configuration ini file that solved the timeout issue, and thus the 502 Bad Gateway issue.
autoboot.ini
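The relevant change in that file was simply the worker count; the other settings are specific to my project:

[uwsgi]
# reduced from 5; a single worker was enough here and stopped the timeouts
processes = 1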
Here's my nginx config too.
nginx.conf
I found the issue.
My [runserver] socket (app.sock) should be the one referenced in the upstream django block, and my [wsserver] socket (django.sock) should be the one referenced in the location /ws/ block. In other words, django.conf needs the two socket paths swapped, like so:
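upstream django {
    server unix:/opt/django/app.sock;
}

# ... inside the server block, only the websocket location changes:
location /ws/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://unix:/opt/django/django.sock;
    proxy_buffers 8 32k;
    proxy_buffer_size 64k;
}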