Is the Google Container Engine Kubernetes Service LoadBalancer sending traffic to hosts that aren't listening?

Posted 2019-02-28 10:21


Question: Is the Google Cloud network LoadBalancer that's created by Kubernetes (via Google Container Engine) sending traffic to hosts that aren't listening? "This target pool has no health check, so traffic will be sent to all instances regardless of their status."

I have a Service (an NGINX reverse proxy) that targets specific pods and exposes TCP ports 80 and 443. In my example only one NGINX pod is running within the instance pool. The Service type is "LoadBalancer". On Google Container Engine this creates a new load balancer (LB) with a target pool made up of specific VM instances. An ephemeral external IP address for the LB and an associated firewall rule that allows incoming traffic are then created.
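For reference, the Service is shaped roughly like this (the names and labels are placeholders, not my actual manifest):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-proxy            # placeholder name
spec:
  type: LoadBalancer           # tells Container Engine to create the network LB and target pool
  selector:
    app: nginx-proxy           # placeholder label matching the single NGINX pod
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
EOF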

My issue is that the Kubernetes auto-generated firewall rule description is "KubernetesAutoGenerated_OnlyAllowTrafficForDestinationIP_1.1.1.1" (the IP is the LB's external IP). In testing I've noticed that even though each VM instance has an external IP address, I cannot contact it on port 80 or 443 on either of the instance IP addresses, only the LB IP. This isn't bad for external user traffic, but when I tried to create a Health Check for my LB I found that it always saw the services as unavailable when it checked each VM instance individually.

I have firewall rules in place so that any IP address may reach TCP ports 80 and 443 on any instance within my pool, so that's not the issue.

Can someone explain this? It makes me think that the LB is passing HTTP requests to both instances even though only one of them has the NGINX pod running on it.

Answer 1:

Is the Google Cloud network LoadBalancer that's created by Kubernetes (via Google Container Engine) sending traffic to hosts that aren't listening?

All hosts (that are currently running a functional kube-proxy process) are capable of receiving and handling incoming requests for the externalized service. The requests will land on an arbitrary node VM in your cluster, match an iptables rule, and be forwarded (by the kube-proxy process) to a pod whose labels match the service's selector.
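If you want to see those rules, they are present on every node. Here's a small sketch, assuming you can SSH into a node and that the Service is named nginx-proxy in the default namespace (both are placeholders for your own names):

$ gcloud compute ssh $NODE_NAME -- \
  'sudo iptables-save -t nat | grep "default/nginx-proxy"'
# Dumps the NAT rules kube-proxy programmed for that Service. Every node
# carries an equivalent set, which is why any node (with a working
# kube-proxy) can accept traffic for the externalized Service.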

So the case where a health checker would prevent requests from being dropped is if you had a node VM that was running in a broken state: it would still have the target tag matching the forwarding rule, but wouldn't be able to handle the incoming packets.

In testing I've noticed that even though each VM instance has an external IP address, I cannot contact it on port 80 or 443 on either of the instance IP addresses, only the LB IP.

This is working as intended. Each service can use any port it desires, which means that multiple services can use ports 80 and 443. If a packet arrives on the host IP on port 80, the host has no way to know which of the (possibly many) services using port 80 the packet should be forwarded to. The iptables rules for services handle packets destined for the virtual internal cluster service IP and the external service IP, but not the host IP.
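You can see the difference from outside the cluster with something like the following (the IPs are placeholders for your own LB and node addresses):

$ LB_IP=1.1.1.1        # the LB's external IP (placeholder)
$ NODE_IP=203.0.113.5  # one node VM's external IP (placeholder)

# Hits the service's iptables rules and reaches the NGINX pod:
$ curl -s -o /dev/null -w '%{http_code}\n' http://$LB_IP/

# Fails: nothing on the host is bound to port 80 on its own IP:
$ curl -s --connect-timeout 5 http://$NODE_IP/ || echo 'no listener on the node IP'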

This isn't bad for external user traffic, but when I tried to create a Health Check for my LB I found that it always saw the services as unavailable when it checked each VM instance individually.

If you want to set up a health check to verify that a node is working properly, you can health check the kubelet process, which runs on port 10250, by installing a firewall rule that lets the health checkers reach it:

$ gcloud compute firewall-rules create kubelet-healthchecks \
  --source-ranges 130.211.0.0/22 \
  --target-tags $TAG \
  --allow tcp:10250

(check out the Container Engine HTTP Load Balancer documentation to help find what you should be using for $TAG).

It would be better to health check the kube-proxy process directly, but it only binds to localhost. The kubelet, on the other hand, binds to all interfaces, so it is reachable by the health checkers, and it should serve as a good indicator that the node is healthy enough to serve requests to your service.
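With that firewall rule in place, you can create the health check and attach it to the target pool that Kubernetes created for your Service. A rough sketch with gcloud (the health-check name and $TARGET_POOL are placeholders; it also assumes the kubelet in your cluster version answers plain HTTP on /healthz at port 10250, which you should verify first):

# Create an HTTP health check against the kubelet's /healthz endpoint
# (assumption: plain HTTP on 10250; some kubelet versions serve HTTPS there).
$ gcloud compute http-health-checks create kubelet-healthz \
  --port 10250 \
  --request-path /healthz

# Attach it to the target pool Kubernetes created for the Service
# (find the pool name with `gcloud compute target-pools list`).
$ gcloud compute target-pools add-health-checks $TARGET_POOL \
  --http-health-check kubelet-healthz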