No live upstreams while connecting to upstream, but the upstream is OK

Posted 2020-07-03 05:03

I have a really weird issue with NGINX.

In my upstream.conf file I have the following upstream:

upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;

    server mymachine:6006;
}

In locations.conf:

location ~ "^/files(?<command>.+)/[0123]" {
        rewrite ^ $command break;
        proxy_pass https://files_1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

In /etc/hosts:

127.0.0.1               localhost               mymachine

When I do wget https://mymachine:6006/alive --no-check-certificate, I get HTTP request sent, awaiting response... 200 OK. I also verified with netstat that port 6006 is listening, and it's OK.

But when I send a request to the NGINX file server, I get the following error:

no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1", upstream: "https://files_1/save"

But the upstream is OK. What is the problem?

Tags: nginx
1 Answer
霸刀☆藐视天下
#2 · 2020-07-03 05:27

When defining an upstream, Nginx treats the destination server as something that can be up or down. Nginx decides whether your upstream server is down based on fail_timeout (default 10s) and max_fails (default 1).

So if you have a few slow requests that time out, Nginx can decide that the server in your upstream is down, and because you only have one, the whole upstream is effectively down and Nginx reports no live upstreams. It is better explained here:

https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/
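
To make those defaults concrete, here is a sketch of how the upstream from the question behaves with the implicit defaults written out (the values are just nginx's documented defaults, not something that was actually set in the original config):

upstream files_1 {
    least_conn;

    # Implicit defaults: a single failed attempt within 10 seconds marks
    # the server unavailable, and it stays unavailable for the next 10 seconds.
    server mymachine:6006 max_fails=1 fail_timeout=10s;
}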

I had a similar problem, and you can prevent this by overriding those settings.

For example:

upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006 max_fails=0;
}
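
Setting max_fails=0 disables the failure accounting for that server, so Nginx never marks it as down. If you would rather keep failure detection but make it less aggressive, a possible alternative (the values below are only an example, not something from the original setup) is to raise the thresholds on the server line instead:

upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;

    # Example values only: tolerate up to 5 failed attempts within 30 seconds
    # before the server is taken out of rotation for that window.
    server mymachine:6006 max_fails=5 fail_timeout=30s;
}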