I am trying to prepare a dev environment for my team so we can develop, stage, and deploy with the same (or nearly the same) environment.
Getting a Kubernetes cluster running locally via http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html was nice and simple. I could then use kubectl to start the pods and services for my application.
However, the services' IP addresses are different each time you start the cluster up, which is a problem if your code needs to use them. On Google Container Engine, kube-dns means you can access a service by name, so the code that uses the service can remain constant between deployments.
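For reference, this is the sort of stable addressing I'm after once cluster DNS works (a sketch; the service name my-service and its port are hypothetical):

# With kube-dns, a service called "my-service" in the "default" namespace
# resolves at a predictable name, so application code can do something like:
curl http://my-service.default.svc.cluster.local:8080/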
Now, I know we could piece together the IP and port via environment variables, but I wanted a setup that is as close to identical as possible.
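For context, this is the environment-variable approach I'd rather avoid (a sketch; it assumes a service named redis-master, for which Kubernetes injects {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT into pods created after the service exists):

# Piece the address together from the injected service variables
echo "redis-master is at $REDIS_MASTER_SERVICE_HOST:$REDIS_MASTER_SERVICE_PORT"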
So I followed some instructions found in various places, both here and in the Kubernetes repo, like this.
Sure enough, with a little editing of the YAML files, KubeDNS starts up.
But an nslookup of kubernetes.default fails. The health check on the DNS pod also fails (because it can't resolve the test lookup), and the container is shut down and restarted.
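This is roughly how I'm testing the lookup (assuming a busybox pod named busybox is already running in the default namespace, per the usual DNS debugging advice):

kubectl exec busybox -- nslookup kubernetes.default
# When DNS is healthy this should return the cluster IP of the kubernetes
# service (10.0.0.1 here); instead the lookup fails.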
Running kubectl cluster-info results in:
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
So all good. However, hitting that endpoint results in:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for \"kube-dns\"",
  "code": 500
}
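That "no endpoints available" message suggests the kube-dns service has no ready pods behind it, which can be checked directly (a sketch of the checks):

kubectl get endpoints kube-dns --namespace=kube-system
kubectl get pods --namespace=kube-system
# An empty ENDPOINTS column here would explain the 500 above.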
I am now at a loss, though I suspect it is something obvious or easy to fix, as everything else seems to be working. Here is how I start up the cluster and DNS.
# Run etcd
docker run --net=host \
-d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd \
--addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
# Run the master
docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube:v1.0.6 \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" \
--address="0.0.0.0" --api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster_dns=10.0.0.10 --cluster_domain=cluster.local
# Run the service proxy
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.6 \
/hyperkube proxy --master=http://127.0.0.1:8080 --v=2
# forward local port - after this you should be able to use kubectl locally
machine=default; ssh -i ~/.docker/machine/machines/$machine/id_rsa docker@$(docker-machine ip $machine) -L 8080:localhost:8080
All the containers spin up OK, and kubectl get nodes reports OK. Note that I pass the DNS flags (--cluster_dns and --cluster_domain) to the kubelet.
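For completeness, this is the check I mean, pointing kubectl explicitly at the forwarded port:

kubectl -s http://localhost:8080 get nodes
# reports 127.0.0.1 as Ready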
I then start the DNS ReplicationController with this file, which is an edited version of the one from here:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v9
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v9
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v9
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v9
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd:2.0.9
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.11
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/kube2sky"
        - -domain=cluster.local
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://localhost:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 1
          timeoutSeconds: 5
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.
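I create the controller with something like this (the filename is just what I saved the edited manifest as locally):

kubectl create -f skydns-rc.yaml
kubectl get rc --namespace=kube-system
# shows kube-dns-v9 with 1 replica requested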
Then I start the service (again, based on the file in the repo):
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
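And I create it the same way (again, the filename is my own):

kubectl create -f skydns-svc.yaml
kubectl get svc --namespace=kube-system
# should list kube-dns at 10.0.0.10 with ports 53/UDP and 53/TCP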
Based on another SO question, I made the assumption that clusterIP should be the value I passed into the master (via --cluster_dns), not the IP of the host machine. I am sure it has to be something obvious or simple that I have missed. Anyone out there who can help?
Thanks!
UPDATE
I found this closed issue over in the GitHub repo. It seems I have an identical problem.
I have added to the thread on GitHub and tried lots of things, but still no progress. I tried using different images, but they produced different errors (or the same error presenting itself differently; I couldn't tell).
Everything I have found relating to this suggests IP restrictions or firewall/security settings, so I decided to curl the API from the container itself.
docker exec 49705c38846a echo $(curl http://0.0.0.0:8080/api/v1/services?labels=)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   908  100   908    0     0   314k      0 --:--:-- --:--:-- --:--:--  443k
{ "kind": "ServiceList", "apiVersion": "v1", "metadata": { "selfLink": "/api/v1/services", "resourceVersion": "948" }, "items": [ { "metadata": { "name": "kubernetes", "namespace": "default", "selfLink": "/api/v1/namespaces/default/services/kubernetes", "uid": "369a9307-796e-11e5-87de-7a0704d1fdad", "resourceVersion": "6", "creationTimestamp": "2015-10-23T10:09:57Z", "labels": { "component": "apiserver", "provider": "kubernetes" } }, "spec": { "ports": [ { "protocol": "TCP", "port": 443, "targetPort": 443, "nodePort": 0 } ], "clusterIP": "10.0.0.1", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } } ] }
That seems like a valid response to me, so why the JSON parse error coming from kube2sky?
Failed to list *api.Service: couldn't get version/kind; json parse error: invalid character '<' looking for beginning of value
Failed to list *api.Endpoints: couldn't get version/kind; json parse error: invalid character '<' looking for beginning of value
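If it helps, one way to pull those kube2sky logs straight from Docker (a sketch; the grep just picks out the kube2sky container ID):

# Dump the kube2sky container's logs to see what it is actually doing
docker logs $(docker ps | grep kube2sky | awk '{print $1}' | head -n 1)

The leading '<' in the parse error makes me think kube2sky is getting an HTML page back rather than JSON, but I can't see from where.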