nginx proxy to comet

Published 2019-03-07 22:47

Question:

I need some help from some linux gurus. I am working on a webapp that includes a comet server. The comet server runs on localhost:8080 and exposes the url localhost:8080/long_polling for clients to connect to. My webapp runs on localhost:80.

I've used nginx to proxy requests from port 80 to the comet server (localhost:80/long_polling is proxied to localhost:8080/long_polling). However, I have two gripes with this solution:

  1. nginx gives me a 504 Gateway Time-out after a minute, even though I changed EVERY single timeout setting to 600 seconds
  2. I don't really want nginx to have to proxy to the comet server anyway - the nginx proxy is not built for long-lasting connections (possibly up to half an hour). I would rather allow the clients to connect directly to the comet server, and let the comet server deal with it.

So my question is: is there any linux trick that allows me to expose localhost:8080/long_polling to localhost:80/long_polling without using the nginx proxy? There must be something. That's why I think this question can probably be best answered by a linux guru.

The reason I need /long_polling to be exposed on port 80 is so I can use AJAX to connect to it (the AJAX same-origin policy).

This is my nginx proxy.conf for reference:

proxy_redirect              off;                                                                                                                         
proxy_set_header            Host $host;
proxy_set_header            X-Real-IP $remote_addr;
proxy_set_header            X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size        10m;
client_body_buffer_size     128k;
proxy_connect_timeout       600;
proxy_send_timeout          600;
proxy_read_timeout          600;
proxy_buffer_size           4k;
proxy_buffers               4 32k;
proxy_busy_buffers_size     64k;
proxy_temp_file_write_size  64k;
send_timeout                600;
proxy_buffering             off;
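The question only shows the shared proxy settings, not the server block that uses them. A minimal location block wiring port 80 to the comet server might look like the following sketch (the upstream address and path are from the question; the server block itself is an assumption):

```nginx
server {
    listen 80;
    server_name localhost;

    # Forward long-polling requests to the comet server on port 8080.
    location /long_polling {
        proxy_pass http://127.0.0.1:8080/long_polling;
        # Long-held connections: repeat the critical settings here so they
        # are not silently overridden by another context.
        proxy_buffering    off;
        proxy_read_timeout 600;
    }
}
```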

Answer 1:

I don't think that is possible...

localhost:8080/long_polling is a URI, or more exactly it should be http://localhost:8080/long_polling. In HTTP, that URI is resolved by requesting /long_polling on port 8080 from the host 'localhost', that is, opening a TCP connection to 127.0.0.1:8080 and sending

GET /long_polling HTTP/1.1
Host: localhost:8080

plus some additional HTTP headers. I haven't heard of a port being bound across processes.

Actually, if I understand correctly, nginx was designed to be a scalable proxy. They also claim it needs about 2.5 MB for 10,000 idle HTTP connections, so that really shouldn't be a problem.

What comet server are you using? Could you maybe let the comet server proxy to a web server instead? Normal HTTP requests should be handled quickly.

greetz

back2dos



Answer 2:

Here's my nginx.conf and my proxy.conf. Note, however, that the proxy.conf is way overkill; I was just setting all of these options while trying to debug my program.

/etc/nginx/nginx.conf

worker_processes  1;                                                                                                                                     
user www-data;

error_log  /var/log/nginx/error.log debug;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include /etc/nginx/proxy.conf;

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    access_log  /var/log/nginx/access.log;

    sendfile        on;
    tcp_nopush     on;

    keepalive_timeout  600;
    tcp_nodelay        on;

    gzip  on;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

/etc/nginx/proxy.conf

proxy_redirect              off;                                                                                                                         
proxy_set_header            Host $host;
proxy_set_header            X-Real-IP $remote_addr;
proxy_set_header            X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size        10m;
client_body_buffer_size     128k;
proxy_connect_timeout       6000;
proxy_send_timeout          6000;
proxy_read_timeout          6000;
proxy_buffer_size           4k;
proxy_buffers               4 32k;
proxy_busy_buffers_size     64k;
proxy_temp_file_write_size  64k;
send_timeout                6000;
proxy_buffering             off;
proxy_next_upstream error;


Answer 3:

I actually managed to get this working now. Thank you all. The reason nginx was returning a 504 timeout was a silly one: I hadn't included proxy.conf in my nginx.conf, like so:

include /etc/nginx/proxy.conf;

So I'm keeping nginx as a frontend proxy to the comet server.
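For anyone hitting the same 504: the include has to sit inside the http block of nginx.conf, otherwise none of the proxy timeouts apply. The surrounding directives below are taken from the config posted in Answer 2:

```nginx
http {
    include /etc/nginx/proxy.conf;   # without this line, the timeout settings
                                     # in proxy.conf are never applied

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    # ...
}
```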



Answer 4:

There is now a Comet plugin for Nginx. It will probably solve your issues quite nicely.

http://www.igvita.com/2009/10/21/nginx-comet-low-latency-server-push/



Answer 5:

Try

proxy_next_upstream error;

The default is

proxy_next_upstream error timeout;

The proxy_connect_timeout cannot be more than 75 seconds.
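A sketch of how this directive might sit in a long-polling location block (the path is taken from the question; the rest of the block is illustrative):

```nginx
location /long_polling {
    proxy_pass http://127.0.0.1:8080;
    # Only retry the next upstream on hard errors, not on timeouts,
    # so a slow long-poll response is not treated as a failure.
    proxy_next_upstream error;
    # proxy_connect_timeout is capped at 75 seconds by nginx;
    # the read/send timeouts are the ones that can go higher.
    proxy_read_timeout 600;
    proxy_send_timeout 600;
}
```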

http://wiki.nginx.org/NginxHttpProxyModule#proxy_next_upstream

http://wiki.nginx.org/NginxHttpProxyModule#proxy_connect_timeout



Answer 6:

Without doing some serious TCP/IP mangling, you can't expose two applications on the same TCP port of the same IP address. Once nginx has started servicing a connection, it can't hand it off to another application; it can only proxy it.

So either use another port, use another IP address (it could be on the same physical machine), or live with the proxy.

Edit: I guess nginx is timing out because it doesn't see any activity for a long time. Adding a null message every few minutes might keep the connection from failing.
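The null-message idea can be sketched on the application side. Assuming a node.js comet server (the function and variable names here are hypothetical, not from the original post):

```javascript
// Hypothetical sketch of the "null message" keep-alive idea: periodically
// write a harmless line to every parked long-polling response so that
// intermediaries like nginx see traffic and do not time the connection out.

function sendHeartbeat(clients) {
  // A line starting with ':' is easy for clients to recognize and discard.
  for (const res of clients) {
    res.write(':keepalive\n');
  }
}

// In a real server this would run on a timer shorter than the proxy timeout:
// setInterval(() => sendHeartbeat(waitingClients), 30 * 1000);
```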



Answer 7:

You might want to try listen(80) on the node.js server instead of 8080 (I presume that's what you are using as an async server?) and potentially leave out Nginx altogether. I use Connect middleware and Express to serve static files and deal with the caching that would normally be handled by Nginx. If you want to have multiple instances of node running (which I would advise), you might want to look into node.js itself as a proxy / load balancer to the other node instances, rather than using Nginx as your gateway. I did run into a problem with this when I was serving too many static image files at once, but after I put the images on S3 it stabilized. Nginx MAY be overkill for what you are doing. Try it and see. Best of luck.



Tags: nginx comet