I have an HTTP service running on a Google Container Engine cluster (behind a Kubernetes service).
My goal is to access that service from a Dataflow job running in the same GCP project using a fixed name (in the same way services can be reached from inside GKE via DNS). Any ideas? Here is what I have considered so far:
- Most solutions I have read on Stack Overflow rely on having kube-proxy installed on the machines trying to reach the service. As far as I know, there is no reliable way to set that up on every worker instance created by Dataflow.
- One option is to create an external load balancer and add an A record for it in the public DNS. Although that works, I would rather not have an entry in my public DNS records pointing to this service.
EDIT: Internal load balancing is now supported on GKE (now known as Kubernetes Engine): https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
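For reference, a minimal sketch of such a service, with placeholder name, labels and ports; the annotation is what requests an internal (rather than public) load balancer from GKE:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-http-service                               # placeholder name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"   # request an internal load balancer
spec:
  type: LoadBalancer
  selector:
    app: my-http-app                                   # placeholder: must match your pod labels
  ports:
    - port: 80                                         # port exposed on the internal IP
      targetPort: 8080                                 # placeholder: port your container listens on
```

The internal IP that GKE assigns is reachable from other resources in the same VPC network and region, which should include the Dataflow workers as long as they run on that network.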
I have implemented this in a pretty smooth way, IMHO. I will try to briefly walk through how it works:
Create your Kubernetes service with type NodePort, which will expose the service on that port on all nodes, i.e. all GCE instances in your cluster. This is what we want! See the service spec sketched below.
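A minimal NodePort spec along those lines, assuming placeholder name, labels and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-http-service          # placeholder name
spec:
  type: NodePort
  selector:
    app: my-http-app             # placeholder: must match your pod labels
  ports:
    - port: 80                   # cluster-internal port of the service
      targetPort: 8080           # placeholder: port your container listens on
      nodePort: 30080            # fixed port opened on every node (must be in 30000-32767)
```

With a fixed nodePort, every node in the cluster answers on that port, so a load balancer placed in front of the cluster's instances can forward traffic to any of them.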
Then set up the load balancer itself, with the health check, forwarding rule and firewall rule it needs to work.
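A rough sketch of that setup with gcloud; the names, health check path, port, zone, region, instance names and node tag below are all placeholders that need to match your cluster:

```bash
#!/bin/bash
# Placeholder values; adjust to your cluster and to the nodePort from the service spec.
REGION=us-central1
ZONE=us-central1-a
NODE_PORT=30080

# Health check that probes the node port on each node.
gcloud compute http-health-checks create my-service-health-check \
  --port "$NODE_PORT" \
  --request-path "/healthz"

# Target pool of backend instances, tied to the health check.
gcloud compute target-pools create my-service-pool \
  --region "$REGION" \
  --http-health-check my-service-health-check

# Add the cluster's node instances to the pool (instance names are cluster-specific).
gcloud compute target-pools add-instances my-service-pool \
  --instances gke-my-cluster-node-1,gke-my-cluster-node-2 \
  --instances-zone "$ZONE"

# Forwarding rule: traffic hitting the load balancer IP on the node port goes to the pool.
gcloud compute forwarding-rules create my-service-forwarding-rule \
  --region "$REGION" \
  --port-range "$NODE_PORT" \
  --target-pool my-service-pool

# Firewall rule allowing traffic to the node port on the cluster's nodes
# (the target tag is cluster-specific, usually of the form gke-<cluster-name>-<id>-node).
gcloud compute firewall-rules create allow-my-service-nodeport \
  --allow "tcp:${NODE_PORT}" \
  --target-tags gke-my-cluster-1234abcd-node
```

The IP to call from the Dataflow job is the one attached to the forwarding rule (gcloud compute forwarding-rules describe shows it).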
Lukasz's answer is probably the most straightforward way to expose your service to Dataflow. But if you really don't want a public IP and DNS record, you can use a GCE route to deliver traffic to your cluster's private IP range (something like option 1 in this answer).
This would let you hit your service's stable IP. I'm not sure how to get Kubernetes' internal DNS to resolve from Dataflow.
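As an illustration, a route along these lines (the destination range would be your cluster's CIDR and the next hop one of its nodes; all values here are placeholders) sends traffic bound for the cluster range through a node, where it is handled like any other in-cluster traffic:

```bash
# Placeholder values: use your cluster's CIDR and one of its node instances as the next hop.
gcloud compute routes create route-to-my-gke-cluster \
  --destination-range 10.3.240.0/20 \
  --next-hop-instance gke-my-cluster-node-1 \
  --next-hop-instance-zone us-central1-a
```

Note that a single next-hop node is a single point of failure, which is one more reason the load balancer approach is usually preferable.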
The Dataflow job running on GCP will not be part of the Google Container Engine cluster, so it will not have access to the internal cluster DNS by default.
Try setting up a load balancer for the service you want to expose, one that knows how to route the "external" traffic to it. This will let you connect to the service's IP address directly from a Dataflow job executing on GCP.
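One straightforward way to get such a load balancer, assuming the placeholder names and ports below, is to declare the service as type LoadBalancer and let GKE provision the external IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-http-service          # placeholder name
spec:
  type: LoadBalancer             # GKE provisions an external network load balancer
  selector:
    app: my-http-app             # placeholder: must match your pod labels
  ports:
    - port: 80                   # port exposed on the external IP
      targetPort: 8080           # placeholder: port your container listens on
```

The assigned external IP shows up under kubectl get service my-http-service, and the Dataflow job can then call it directly.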