I am getting a lot of 499 nginx error codes. I see that this is a client-side issue, not a problem with Nginx or my uWSGI stack. I note the correlation in the uWSGI logs when I get a 499:
address space usage: 383692800 bytes/365MB} {rss usage: 167038976
bytes/159MB} [pid: 16614|app: 0|req: 74184/222373] 74.125.191.16 ()
{36 vars in 481 bytes} [Fri Oct 19 10:07:07 2012] POST /bidder/ =>
generated 0 bytes in 8 msecs (HTTP/1.1 200) 1 headers in 59 bytes (1
switches on core 1760)
SIGPIPE: writing to a closed pipe/socket/fd (probably the client
disconnected) on request /bidder/ (ip 74.125.xxx.xxx) !!!
Fri Oct 19 10:07:07 2012 - write(): Broken pipe [proto/uwsgi.c line
143] during POST /bidder/ (74.125.xxx.xxx)
IOError: write error
I am looking for a more in-depth explanation, and hoping there is nothing wrong with my nginx config for uwsgi. I am taking it at face value: it's not a me problem, it's a client issue.
Thanks
HTTP 499 in Nginx means that the client closed the connection before the server answered the request. In my experience it is usually caused by a client-side timeout. As far as I know, it's an Nginx-specific error code.
In my case, I was impatient and ended up misinterpreting the log.
In fact the real problem was the communication between nginx and uwsgi, not between the browser and nginx. If I had loaded the site in my browser and waited long enough, I would have gotten a "504 - Gateway Timeout". But it took so long that I kept trying things and then refreshing in the browser, so I never waited long enough to see the 504 error. When you refresh in the browser, the previous request is closed, and Nginx writes that in the log as a 499.
Elaboration
Here I will assume that the reader knows as little as I did when I started playing around.
My setup was a reverse proxy (the nginx server) with an application server (the uWSGI server) behind it. All requests from the client go to the nginx server, are forwarded to the uWSGI server, and the response is sent back the same way. I think this is how most people use nginx/uwsgi, and how it is supposed to be used.
My nginx worked as it should, but something was wrong with the uwsgi server. There are (at least) two ways in which the uwsgi server can fail to respond to the nginx server:
1) uWSGI says, "I'm processing, just wait and you will soon get a response". nginx is only willing to wait a certain period of time, e.g. 20 seconds. After that it responds to the client with a 504 error (see the config sketch after this list).
2) uWSGI is dead, or uWSGI dies while nginx is waiting for it. nginx sees that right away, and in that case it returns a 499 error.
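The waiting period in case 1) is controlled on the nginx side. A minimal sketch, assuming nginx forwards to uWSGI with uwsgi_pass; the 20-second value is only an illustration, not taken from any real config:

# inside the location block that does uwsgi_pass to the app server
uwsgi_connect_timeout 20s;   # how long nginx waits to establish a connection to uWSGI
uwsgi_read_timeout 20s;      # how long nginx waits for a response before answering 504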
I was testing my setup by making requests from the client (browser). In the browser nothing happened; it just kept hanging. After maybe 10 seconds (less than the timeout) I concluded that something was not right (which was true), and closed the uWSGI server from the command line. Then I would go to the uWSGI settings, try something new, and restart the uWSGI server. The moment I closed the uWSGI server, the nginx server would return a 499 error.
So I kept debugging the 499 error, which means googling for the 499 error. But if I had waited long enough, I would have gotten the 504 error. If I had gotten the 504 error, I would have been able to understand the problem better, and then been able to debug it.
So the conclusion is that the problem was with uWSGI, which kept hanging ("Wait a little longer, just a little longer, then I will have an answer for you...").
How I fixed that problem, I don't remember. I guess it could be caused by a lot of things.
"Client closed the connection" doesn't mean it's a browser issue!? Not at all!
You can find 499 errors in a log file if you have a load balancer (LB) in front of your webserver (nginx), e.g. an AWS ELB or HAProxy (custom). In that case the LB acts as a client to nginx.
If you run HAProxy with the default values of:
timeout client 60000
timeout server 60000
That means the LB will time out after 60000 ms if there is no response from nginx. Timeouts can happen for busy websites or for scripts that need more time to execute. You'll need to find a timeout that works for you. For example, extend it to:
timeout client 180s
timeout server 180s
and you will probably be set.
Depending on your setup you might see a 504 Gateway Timeout error in your browser, which indicates that something is wrong with php-fpm, but that will not be the case with 499 errors in your log files.
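For completeness, if the slow responses come from php-fpm behind nginx, the matching knob on the nginx side is the FastCGI read timeout. A sketch under that assumption; the socket path and the 180s value are placeholders chosen to line up with the LB timeouts above:

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;   # hypothetical php-fpm socket
    # how long nginx waits for php-fpm before returning 504 to the LB/client
    fastcgi_read_timeout 180s;
}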
...came here from a google search
I found the answer elsewhere here --> https://stackoverflow.com/a/15621223/1093174
which was to raise the connection idle timeout of my AWS elastic load balancer!
(I had set up a Django site with an nginx/apache reverse proxy, and a really, really, really long backend job/view was timing out.)
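If you are on a classic ELB, the idle timeout can also be raised from the AWS CLI; a sketch, where the load balancer name and the 180-second value are placeholders:

aws elb modify-load-balancer-attributes \
    --load-balancer-name my-load-balancer \
    --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":180}}"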
This error is pretty easy to reproduce using standard nginx configuration with php-fpm.
Keeping the F5 button down on a page will create dozens of refresh requests to the server. Each previous request is cancelled by the browser on the next refresh. In my case I found dozens of 499s in my client's online shop log file. From nginx's point of view: if the response has not been delivered to the client before the next refresh request, nginx logs the 499 error.
mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:32 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:33 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:33 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:33 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:33 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:34 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:34 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:34 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:34 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:35 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
mydomain.com.log:84.240.77.112 - - [19/Jun/2018:09:07:35 +0200] "GET /(path) HTTP/2.0" 499 0 "-" (user-agent-string)
If the php-fpm processing takes longer (like a heavyish WP page), it may cause problems, of course. I have heard of php-fpm crashes, for instance, but I believe they can be prevented by configuring services properly, for example by handling calls to xmlrpc.php.
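A minimal sketch of the xmlrpc.php handling mentioned above, assuming you do not need XML-RPC at all (if you do rely on it, e.g. for Jetpack, restrict access instead of blocking it outright):

# stop abusive XML-RPC calls before they ever reach php-fpm
location = /xmlrpc.php {
    deny all;
    access_log off;
    log_not_found off;
}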
In my case I got 499 when the client's API closed the connection before it got any response: it literally sent a POST and immediately closed the connection.
This is resolved by the option:
proxy_ignore_client_abort on;
Nginx doc
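In context it looks roughly like this; a sketch assuming a plain proxy_pass setup, with the upstream address as a placeholder:

location / {
    proxy_pass http://127.0.0.1:8000;   # hypothetical upstream
    # keep processing the request even if the client disconnects,
    # instead of aborting it and logging 499
    proxy_ignore_client_abort on;
}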
Once I got a 499 "Request has been forbidden by antivirus" as an AJAX HTTP response (a false positive by Kaspersky Internet Security with light heuristic analysis; deep heuristic analysis correctly determined there was nothing wrong).
One of the reasons for this behaviour could be that you are using http for uwsgi instead of socket. Use the command below if you are running uwsgi directly.
uwsgi --socket :8080 --module app-name.wsgi
The same configuration in an .ini file is:
chdir = /path/to/app/folder
socket = :8080
module = app-name.wsgi
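On the nginx side, the protocol has to match: with socket = :8080 nginx should talk to uWSGI via uwsgi_pass (the binary uwsgi protocol), whereas http = :8080 would require proxy_pass instead. A minimal sketch, with the port taken from the example above:

location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:8080;   # matches socket = :8080 in the uWSGI config
}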
I encountered this issue and the cause was the Kaspersky Protection plugin in the browser. If you are encountering this, try disabling your plugins to see if that fixes the issue.
Many things can cause a 499 error; in my case, the Content-Length field was missing from the HTTP request sent by a pocco client.