Is there a way to add an arbitrary record to kube-dns?

Published 2019-03-13 09:35

Question:

I'll explain the problem in a very specific way, because I think it's better to be specific than abstract here...

Say there is a MongoDB replica set outside of a Kubernetes cluster, but within the same network. The IP addresses of all members of the replica set are resolved via /etc/hosts on the app servers and db servers.

In an experiment/transition phase, I need to access those MongoDB servers from Kubernetes pods. However, Kubernetes doesn't seem to allow adding custom entries to /etc/hosts in pods/containers.

The MongoDB replica sets are already working with a large data set, so creating a new replica set inside the cluster is not an option.

Because I use GKE, I suppose changing any resources in the kube-dns namespace should be avoided. Configuring or replacing kube-dns to suit my needs is the last thing I want to try.

Is there a way to resolve the IP addresses of custom hostnames in a Kubernetes cluster?

It is just an idea, but if kube2sky could read some entries from a ConfigMap and use them as DNS records, that would be great. e.g. repl1.mongo.local: 192.168.10.100.

EDIT: I referenced this question from https://github.com/kubernetes/kubernetes/issues/12337

Answer 1:

A Service of type ExternalName can be used to reach hosts or IPs outside of Kubernetes.

The following worked for me.

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "tiny-server-5",
        "namespace": "default"
    },
    "spec": {
        "type": "ExternalName",
        "externalName": "192.168.1.15",
        "ports": [{ "port": 80 }]
    }
}
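For reference, the same Service in YAML form. One caveat worth hedging on: the Kubernetes documentation expects `externalName` to hold a DNS name (it is served back to clients as a CNAME record), so pointing it at a raw IP as above may not behave consistently with every resolver; the Service-without-selector plus Endpoints approach from the next answer is the documented way to map a bare IP.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tiny-server-5
  namespace: default
spec:
  type: ExternalName
  externalName: 192.168.1.15   # docs expect a DNS name here, not an IP
  ports:
  - port: 80
```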


Answer 2:

For the record, here is an alternate solution for those who did not check the referenced GitHub issue.

You can define an "external" Service in Kubernetes by not specifying any selector or ClusterIP. You also have to define a corresponding Endpoints object pointing to your external IP.

From the Kubernetes documentation:

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-service"
    },
    "spec": {
        "ports": [
            {
                "protocol": "TCP",
                "port": 80,
                "targetPort": 9376
            }
        ]
    }
}
{
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-service"
    },
    "subsets": [
        {
            "addresses": [
                { "ip": "1.2.3.4" }
            ],
            "ports": [
                { "port": 9376 }
            ]
        }
    ]
}
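The same Service/Endpoints pair in YAML form, a direct translation of the JSON above for readers who keep manifests in that style:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service        # must match the Service name
subsets:
- addresses:
  - ip: 1.2.3.4
  ports:
  - port: 9376
```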

With this, you can point your app inside the containers to my-service:9376, and the traffic will be forwarded to 1.2.3.4:9376.

Limitations:

  • The DNS name used needs to contain only letters, numbers, or dashes. You can't use multi-level names (something.like.this). This means you probably have to modify your app to point just to your-service, and not your-service.domain.tld.
  • You can only point to a specific IP, not to a DNS name. For that, you can define a kind of DNS alias with an ExternalName-type Service.
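Such an alias might look like the following sketch; the Service name and the external hostname `db.example.com` are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-alias              # hypothetical in-cluster name
spec:
  type: ExternalName
  externalName: db.example.com   # hypothetical external DNS name
```

In-cluster clients resolving `mongo-alias` then receive a CNAME to the external name.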


Answer 3:

UPDATE 2017-07-03: Kubernetes 1.7 now supports adding entries to a Pod's /etc/hosts with HostAliases.
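With HostAliases, the entry from the question can be declared directly in the pod spec; a minimal sketch (the pod name, container name, and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongo-client        # placeholder name
spec:
  hostAliases:
  - ip: "192.168.10.100"
    hostnames:
    - "repl1.mongo.local"
  containers:
  - name: app               # placeholder container
    image: your-app-image   # placeholder image
```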


The solution is not about kube-dns, but /etc/hosts. Anyway, the following trick seems to work so far...

EDIT: Changing /etc/hosts may race with the Kubernetes system, so let the script retry.

1) Create a ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-hosts
data:
  hosts: |
    10.0.0.1  db1
    10.0.0.2  db2

2) Add a script named ensure_hosts.sh.

#!/bin/sh                                                                                                           
while true
do
    grep db1 /etc/hosts > /dev/null || cat /mnt/hosts.append/hosts >> /etc/hosts
    sleep 5
done

Don't forget chmod a+x ensure_hosts.sh.
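The append-if-missing step can also be expressed as a small function, which makes it easy to test outside the container; a sketch (the use of `db1` as the marker host is taken from the script above, the function signature is an assumption):

```shell
#!/bin/sh
# Idempotent append: add the ConfigMap-provided entries to the hosts file
# only when the marker host "db1" is not present yet.
ensure_hosts() {
    hosts_file="$1"    # e.g. /etc/hosts
    append_file="$2"   # e.g. /mnt/hosts.append/hosts
    grep -q db1 "$hosts_file" || cat "$append_file" >> "$hosts_file"
}
```

Running it twice appends the entries only once, which is what makes the 5-second loop safe.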

3) Add a wrapper script start.sh to your image.

#!/bin/sh
$(dirname "$(realpath "$0")")/ensure_hosts.sh &
exec your-app args...

Don't forget chmod a+x start.sh

4) Use the ConfigMap as a volume and run start.sh.

apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
  template:
    ...
    spec:
      volumes:
      - name: hosts-volume
        configMap:
          name: db-hosts
      ...
      containers:
      - command:
        - ./start.sh
        ...
        volumeMounts:
        - name: hosts-volume
          mountPath: /mnt/hosts.append
        ...


Answer 4:

Using a ConfigMap seems a better way to set DNS records, but it's a bit heavyweight when adding just a few records (in my opinion). So instead I add records to /etc/hosts with a shell script executed by the Docker CMD.

For example:

Dockerfile

... (omitted)
COPY run.sh /tmp/run.sh
CMD bash /tmp/run.sh

run.sh

#!/bin/bash
echo 192.168.10.100 repl1.mongo.local >> /etc/hosts   # /etc/hosts expects the IP first
# other commands...

Note: if you run MORE THAN ONE container in a pod, you have to add the script to each container, because Kubernetes starts containers in random order, and /etc/hosts may be overwritten by another container (one that starts later).