Reset Kubernetes cluster

Posted 2020-04-14 01:23

I have six desktop machines in my network and I want to build two Kubernetes clusters. Each machine has Ubuntu 16.04 LTS installed. Initially, all the machines were part of a single cluster. However, I removed three of the machines to set up another cluster, and executed the following command on each of these machines:

RESET COMMAND:
sudo kubeadm reset -f && 
 sudo systemctl stop kubelet && 
 sudo systemctl stop docker && 
 sudo rm -rf /var/lib/cni/ && 
 sudo rm -rf /var/lib/kubelet/* && 
 sudo rm -rf /etc/cni/ && 
 sudo ifconfig cni0 down && 
 sudo ifconfig flannel.1 down && 
 sudo ifconfig docker0 down && 
 sudo ip link delete cni0 && 
 sudo ip link delete flannel.1

After this I rebooted each machine and proceeded with the setup of a new cluster, starting with the master node:

INSTALL COMMAND:
sudo kubeadm init phase certs all && 
 sudo kubeadm init phase kubeconfig all && 
 sudo kubeadm init phase control-plane all --pod-network-cidr 10.244.0.0/16 &&
 sudo sed -i 's/initialDelaySeconds: [0-9][0-9]/initialDelaySeconds: 240/g' /etc/kubernetes/manifests/kube-apiserver.yaml &&
 sudo sed -i 's/failureThreshold: [0-9]/failureThreshold: 18/g' /etc/kubernetes/manifests/kube-apiserver.yaml &&
 sudo sed -i 's/timeoutSeconds: [0-9][0-9]/timeoutSeconds: 20/g' /etc/kubernetes/manifests/kube-apiserver.yaml &&
 sudo kubeadm init \
   --v=1 \
   --skip-phases=certs,kubeconfig,control-plane \
   --ignore-preflight-errors=all \
   --pod-network-cidr 10.244.0.0/16  

After this I also installed flannel. Once the master was up, I ran kubeadm join on the other two machines to add them to the cluster, and finally installed NGINX-Ingress from the master node.
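
For reference, a typical flannel install and join sequence looks like the following (the manifest URL, master address, token, and hash are placeholders, not the exact values I used):

FLANNEL / JOIN COMMANDS (illustrative):
# on the master node, deploy the flannel pod network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# on each of the two worker nodes, using the token printed by kubeadm init:
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>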

Now I wanted to reset the cluster and redo this setup. I reset each machine using the RESET COMMAND and proceeded with the INSTALL COMMAND on the master node. However, after running the INSTALL COMMAND, kubectl get pods --all-namespaces still shows the pods from the previous installation:

NAMESPACE       NAME                              READY   STATUS              RESTARTS   AGE
kube-system     coredns-fb8b8dccf-h5hhk           0/1     ContainerCreating   1          20h
kube-system     coredns-fb8b8dccf-jblmv           0/1     ContainerCreating   1          20h
kube-system     etcd-ubuntu6                      1/1     Running             0          19h
kube-system     kube-apiserver-ubuntu6            1/1     Running             0          76m
kube-system     kube-controller-manager-ubuntu6   0/1     CrashLoopBackOff    7          75m
kube-system     kube-flannel-ds-amd64-4pqq6       1/1     Running             0          20h
kube-system     kube-flannel-ds-amd64-dvfmp       0/1     CrashLoopBackOff    7          20h
kube-system     kube-flannel-ds-amd64-dz9st       1/1     Terminating         0          20h
kube-system     kube-proxy-9vfjx                  1/1     Running             0          20h
kube-system     kube-proxy-q5c86                  1/1     Running             0          20h
kube-system     kube-proxy-zlw4v                  1/1     Running             0          20h
kube-system     kube-scheduler-ubuntu6            1/1     Running             0          76m
nginx-ingress   nginx-ingress-6957586bf6-fg2tt    0/1     Terminating         22         19h

Why am I seeing the pods from the previous installation?

1 Answer
够拽才男人
#2 · 2020-04-14 02:00

So yes, basically when you create a single control-plane cluster using kubeadm, you are installing a cluster that has a single control-plane node, with a single etcd database running on it.
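
You can confirm that the old state survived your reset by checking whether the etcd data directory on the master is still populated (assuming the default kubeadm layout):

# if this still lists snap/wal data after the reset, the old cluster state is still there:
sudo ls -la /var/lib/etcd/member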

The default etcd data directory used by kubeadm is /var/lib/etcd on the control-plane node. You should clean it up to avoid restoring the previous cluster's state.
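
For example, your RESET COMMAND can be extended with one more step on the control-plane node (assuming the default data directory):

# wipe the old etcd database so the new cluster starts from a clean state:
sudo rm -rf /var/lib/etcd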

BTW, there is the same issue for k8s 1.15, and it should be fixed in 1.15.1: https://github.com/kubernetes/sig-release/blob/3a3c9f92ef484656f0cb4867f32491777d629952/releases/patch-releases.md#115
