How to call a service exposed by a Kubernetes cluster

Published 2019-01-11 11:09

Question:

I have two services, S1 in cluster K1 and S2 in cluster K2. They have different hardware requirements. Service S1 needs to talk to S2.

I don't want to expose a public IP for S2 for security reasons. Using NodePorts on K2's compute instances with network load balancing takes away flexibility, as I would have to add/remove K2's compute instances in the target pool each time a node is added or removed in K2.

Is there something like a "service-selector" for automatically updating the target pool? If not, is there a better approach for this use case?

Answer 1:

I can think of a couple of ways to access services across multiple clusters connected to the same GCP private network:

  1. Bastion route into k2 for all of k2's services:

Find the SERVICE_CLUSTER_IP_RANGE for the k2 cluster. On GKE, it will be the servicesIpv4Cidr field in the output of `clusters describe`:

    $ gcloud beta container clusters describe k2
    ...
    servicesIpv4Cidr: 10.143.240.0/20
    ...
    

    Add an advanced routing rule to take traffic destined for that range and route it to a node in k2:

$ gcloud compute routes create k2-services --destination-range 10.143.240.0/20 --next-hop-instance k2-node-0
    

    This will cause k2-node-0 to proxy requests from the private network for any of k2's services. This has the obvious downside of giving k2-node-0 extra work, but it is simple.

  2. Install k2's kube-proxy on all nodes in k1.

    Take a look at the currently running kube-proxy on any node in k2:

    $ ps aux | grep kube-proxy
    ... /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2
    

Copy k2's kubeconfig file to each node in k1 (say /var/lib/kube-proxy/kubeconfig-k2) and start a second kube-proxy on each node:

    $ /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig-k2 --healthz-port=10247
    

Now, each node in k1 handles proxying to k2 locally. This is a little tougher to set up, but it has better scaling properties.
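Approach 1 can be scripted so the route always matches the cluster's actual service range rather than a hard-coded CIDR. This is a sketch, assuming a GKE cluster named k2 and a node k2-node-0; the route name `k2-services` is illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Look up k2's service CIDR instead of hard-coding it.
CIDR=$(gcloud container clusters describe k2 \
  --format='value(servicesIpv4Cidr)')

# Route traffic for that range to a k2 node, which will
# proxy it on to the right service via its local kube-proxy.
gcloud compute routes create k2-services \
  --destination-range "$CIDR" \
  --next-hop-instance k2-node-0
```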
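Rolling out approach 2 across k1 might look like the following sketch. The node names, the local `kubeconfig-k2` file, and the master address are all placeholders; a real deployment would run the second kube-proxy under systemd or as a DaemonSet rather than backgrounding it over SSH:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative list of k1 nodes; substitute your own.
K1_NODES="k1-node-0 k1-node-1 k1-node-2"

for node in $K1_NODES; do
  # Copy k2's kubeconfig to the node...
  gcloud compute scp kubeconfig-k2 \
    "$node":/var/lib/kube-proxy/kubeconfig-k2

  # ...and start a second kube-proxy pointed at k2's master.
  # A different healthz port avoids clashing with the proxy
  # already serving k1's own services on that node.
  gcloud compute ssh "$node" --command \
    'sudo /usr/local/bin/kube-proxy \
       --master=https://k2-master-ip \
       --kubeconfig=/var/lib/kube-proxy/kubeconfig-k2 \
       --healthz-port=10247 &'
done
```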

As you can see, neither solution is particularly elegant. Discussions are ongoing about how this type of setup should ideally work in Kubernetes. You can take a look at the Cluster Federation proposal doc (specifically the Cross Cluster Service Discovery section) and join the discussion by opening issues or sending PRs.



Answer 2:

GKE now supports Internal Load Balancers: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing

Its primary use case is a load balancer that is not exposed to the public internet, so a service running on GKE can be reached from other GCE VMs or from other GKE clusters in the same network.
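A minimal internal load balancer for S2 could be declared like this (a sketch: the service name, selector `app: s2`, and ports are placeholder values; the annotation is what keeps the load balancer off the public internet on GKE clusters of that era):

```shell
# Expose S2 in k2 behind an internal (private-network-only)
# load balancer. app=s2 and port 80 -> 8080 are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: s2-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: s2
  ports:
  - port: 80
    targetPort: 8080
EOF
```

S1 in k1 can then reach S2 via the load balancer's private IP, and nodes can come and go in k2 without any target-pool maintenance.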