Ingress gives 502 error

Posted 2020-06-09 05:32

If I run through the HTTP load balancer example, it works fine in my Google Container Engine project, and when I run "kubectl describe ing" the backend is "HEALTHY". If I then swap the Service out for one that points at my app, as shown here:

apiVersion: v1
kind: Service
metadata:
  name: app
  labels:
    name: app
spec:
  ports:
  - port: 8000
    name: http
    targetPort: 8000
  selector:
    name: app
  type: NodePort

The app I'm running is Django behind Gunicorn, and it works just fine if I make that Service a LoadBalancer instead of a NodePort. The Ingress is:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
spec:
  backend:
    serviceName: app
    servicePort: 8000

Now when I run "kubectl describe ing" the backend is listed as "UNHEALTHY" and all requests to the ingress IP give a 502.

  1. Is the 502 a symptom of the bad health check?
  2. What do I have to do to make the health check pass? I'm pretty sure the container running my app is healthy. I never set up a health check, so I'm assuming there's something I still have to configure, but my googling hasn't gotten me anywhere.

4 Answers
仙女界的扛把子
#2 · 2020-06-09 06:07

After a lot of digging I found the answer. According to the prerequisites here: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#prerequisites, the application must return a 200 status code at '/'. Because my application was returning a 302 (a redirect to the login page), the health check was failing. When the health check fails, requests through the ingress return a 502.
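
As a follow-up, those prerequisites also allow a second option: if the pods behind the Service expose an HTTP readinessProbe, the GCE ingress controller adopts the probe's path for its health check instead of '/'. Here is a minimal sketch of that, assuming the app serves an unauthenticated endpoint at /healthz that returns a plain 200 (the /healthz path, image, and replica count are placeholders, not from the original setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: app
  template:
    metadata:
      labels:
        name: app
    spec:
      containers:
      - name: app
        image: example/app:latest        # placeholder image
        ports:
        - containerPort: 8000            # matches the Service's targetPort
        readinessProbe:
          httpGet:
            path: /healthz               # must return 200, not a 302
            port: 8000
          initialDelaySeconds: 5
          periodSeconds: 10

For a Django app behind a login redirect, pointing the probe at a view that skips authentication avoids the 302 that broke the default check.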

狗以群分
#3 · 2020-06-09 06:18

In our case, the external port and the internal port were both set to 5000 in values.yaml, but the service was actually listening on port 3000 (we only found out after reading the pod logs), so every request returned a 502 Bad Gateway.

Once I updated the external and internal ports to 3000 and upgraded the deployment of that particular service, we got the expected output.
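
In other words, the chart's port values have to agree with the port the process actually binds to. A sketch of the corrected values, using the externalPort/internalPort keys from our chart (your chart's key names may differ):

service:
  externalPort: 3000   # port the Service exposes
  internalPort: 3000   # must equal the port the app listens on

The quickest sanity check is the pod logs: most servers print the port they bind to at startup.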

ゆ 、 Hurt°
#4 · 2020-06-09 06:19

In my case the cause was an ingress-controller pod that was not running after a cluster crash. The easiest way to detect this is to list the ingresses with

kubectl get ingress

The ADDRESS field should be populated; in my case it was empty.

So I listed the pods in the ingress-nginx namespace

kubectl get pods -n ingress-nginx

and found that the controller pod was not running:

NAME                                       READY   STATUS             RESTARTS   AGE
nginx-ingress-controller-95db98cc5-rp5c4   0/1     CrashLoopBackOff   218        18h

The reason was that the pod had been scheduled onto the master node, where port 80 was already taken by an external nginx. I simply deleted the pod with

kubectl delete pod nginx-ingress-controller-95db98cc5-rp5c4 -n ingress-nginx

and it was rescheduled onto a worker node. That's it; the 502 error was gone.
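
To confirm which node the controller lands on after the reschedule, the wide output adds a NODE column:

kubectl get pods -n ingress-nginx -o wide

A more durable fix would be to keep the controller off the master with a nodeSelector or affinity rule on its Deployment, so a future reschedule can't put it back where port 80 is occupied.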

地球回转人心会变
#5 · 2020-06-09 06:24

I just wanted to supplement the accepted answer with a more concrete explanation of why a health check on / is needed, even though livenessProbe and readinessProbe may be set up and working on the containers in the pods.

I had originally thought they were the same thing, but they're not.

The probes are used by Kubernetes itself to manage the individual containers within a service, whereas the health check on / is at the service level and is part of the GCP load balancer's contract. It has nothing to do with Kubernetes or containers per se.

The reason it's needed with GKE is that the GCP load balancer is the default ingress controller there. As stated in the docs, the GCP load balancer requires the backing services to return a 200 on / so it can tell which backends are live and route traffic accordingly. There's no knob for this on the Ingress resource itself; you just have to satisfy it.

If you use a different ingress controller, such as the nginx one, you may be able to configure this behaviour. But with the out-of-the-box GCP load balancer, you just have to comply. And it's in addition to, and entirely separate from, any livenessProbe or readinessProbe that may or may not be configured on the containers inside your service(s).
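
One way to see what the load balancer actually thinks (a hedged suggestion; the resource names GKE generates differ per cluster) is to list the health checks and backend services it created on the GCP side:

gcloud compute health-checks list
gcloud compute backend-services list

Depending on the controller version, the ingress-created checks may instead appear under the legacy "gcloud compute http-health-checks list".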
