I'm using Kubernetes v1.0.6 on AWS, deployed using kube-up.sh.
The cluster is using kube-dns.
$ kubectl get svc kube-dns --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
It works fine:
$ kubectl exec busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10 ip-10-0-0-10.eu-west-1.compute.internal
Name: kubernetes.default
Address 1: 10.0.0.1 ip-10-0-0-1.eu-west-1.compute.internal
This is the resolv.conf of a pod:
$ kubectl exec busybox -- cat /etc/resolv.conf
nameserver 10.0.0.10
nameserver 172.20.0.2
search default.svc.cluster.local svc.cluster.local cluster.local eu-west-1.compute.internal
Is it possible to have the containers use an additional nameserver?
I have a secondary DNS-based service discovery (on, let's say, 192.168.0.1) that I would like my Kubernetes containers to be able to use for DNS resolution.
P.S. A Kubernetes 1.1 solution would also be acceptable :)
Thank you very much in advance,
George
The DNS addon README has some details on this. Basically, the pod will inherit the resolv.conf setting of the node it is running on, so you could add your extra DNS server to the nodes' /etc/resolv.conf. The kubelet also takes a --resolv-conf argument that may provide a more explicit way for you to inject the extra DNS server. I don't see that flag documented anywhere yet, however.
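For example (a sketch, not verified on this exact 1.0.6 setup), appending the extra resolver from the question to each node's /etc/resolv.conf makes it show up in pods created afterwards:
# run on every node; newly created pods inherit the node's resolv.conf
echo "nameserver 192.168.0.1" >> /etc/resolv.conf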
In Kubernetes (probably) 1.2 we'll be moving to a model where nameservers are assumed to be fungible. There are too many resolvers that break when different nameservers serve different subsets of DNS, and there is no real specification here that we can point to.
In other words, we'll start dropping the host's nameserver records from the container's merged resolv.conf and making our own DNS server the only nameserver line. Our DNS will be able to forward requests to upstream nameservers.
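Under that model a pod's resolv.conf would roughly look like the sketch below (illustrative only, not output from a real cluster): only the cluster DNS server is listed, and kube-dns forwards anything it does not own to the upstream resolvers.
# cluster DNS is the sole resolver; upstream queries are forwarded by kube-dns
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5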
For those using Kubernetes kube-dns, neither the -nameservers flag nor the SKYDNS_NAMESERVERS environment variable is available any longer.
Usage of /kube-dns:
--alsologtostderr log to standard error as well as files
--config-map string config-map name. If empty, then the config-map will not used. Cannot be used in conjunction with federations flag. config-map contains dynamically adjustable configuration.
--config-map-namespace string namespace for the config-map (default "kube-system")
--dns-bind-address string address on which to serve DNS requests. (default "0.0.0.0")
--dns-port int port on which to serve DNS requests. (default 53)
--domain string domain under which to create names (default "cluster.local.")
--healthz-port int port on which to serve a kube-dns HTTP readiness probe. (default 8081)
--kube-master-url string URL to reach kubernetes master. Env variables in this flag will be expanded.
--kubecfg-file string Location of kubecfg file for access to kubernetes master service; --kube-master-url overrides the URL part of this; if neither this nor --kube-master-url are provided, defaults to service account tokens
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log-dir string If non-empty, write log files in this directory
--log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
--logtostderr log to standard error instead of files (default true)
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--version version[=true] Print version information and quit
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
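On kube-dns versions that understand the --config-map flag shown above, upstream resolvers can instead be supplied through that ConfigMap. A minimal sketch, assuming a release that supports the upstreamNameservers key and reusing the 192.168.0.1 resolver from the question:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # queries outside the cluster domain are forwarded to these servers
  upstreamNameservers: |
    ["192.168.0.1"]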
Now, either you put your nameservers in the host's resolv.conf, so DNS is inherited from the node, or you use a custom resolv.conf and pass it to the kubelet with the --resolv-conf flag, as explained here.
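A sketch of the second option, using a hypothetical file path (adjust to your distro and append the flag to the kubelet's existing arguments):
# /etc/kubernetes/resolv.conf on each node
nameserver 172.20.0.2
nameserver 192.168.0.1
search eu-west-1.compute.internal
# point the kubelet at the custom file
kubelet --resolv-conf=/etc/kubernetes/resolv.conf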
I eventually managed to solve this pretty easily by configuring SkyDNS to add an additional nameserver: just add the environment variable SKYDNS_NAMESERVERS, as defined in the SkyDNS docs, to your SkyDNS replication controller. It has minimal impact and does not depend on node changes, etc.
env:
- name: SKYDNS_NAMESERVERS
  value: 10.0.0.254:53,10.0.64.254:53
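For context, the variable sits on the SkyDNS container inside the kube-dns replication controller's pod template; a trimmed sketch (the container name may differ between releases, so check your own manifest):
containers:
- name: skydns
  env:
  - name: SKYDNS_NAMESERVERS
    value: 10.0.0.254:53,10.0.64.254:53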