I am just starting to learn Kubernetes. I've installed CentOS 7.5 with SELinux disabled, and installed kubectl, kubeadm and kubelet from the Kubernetes YUM repository. However, when I run the kubeadm init command, I get this error message:
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [vps604805.ovh.net localhost] and IPs [51.75.201.75 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [vps604805.ovh.net localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [vps604805.ovh.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 51.75.201.75]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 26.003496 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node vps604805.ovh.net as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node vps604805.ovh.net as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition
According to the Linux Foundation course, no further commands should be needed to create my first cluster in my VM. Am I wrong?
Firewalld does have the ports open: 6443/tcp and 10248-10252/tcp.
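For reference, firewall-cmd can open them like so:
$ sudo firewall-cmd --permanent --add-port=6443/tcp
$ sudo firewall-cmd --permanent --add-port=10248-10252/tcp
$ sudo firewall-cmd --reload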
You are hitting the following issue in Kubernetes:
https://github.com/kubernetes/kubeadm/issues/1092
The workaround is to provide the --node-name=<hostname> flag to kubeadm init, as sketched below.
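For example, a minimal sketch using the hostname from the log above:
$ sudo kubeadm init --node-name=vps604805.ovh.net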
Just go through the above ticket for more info. Hope this helps.
EDIT: I had the same issue in kubeadm 1.10.0. After removing --hostname-override from the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file, I was at least able to initialize the cluster. I didn't provide --node-name for my cluster.
I would recommend bootstrapping the Kubernetes cluster as guided in the official documentation. I've gone through the steps below to build a cluster on the same CentOS version, CentOS Linux release 7.5.1804 (Core), and will share them with you; I hope they help you get rid of the issue during installation.
First, wipe your current cluster installation:
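For example (kubeadm reset reverts the changes kubeadm init made on this host):
$ sudo kubeadm reset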
Add the Kubernetes repo for the subsequent kubeadm, kubelet, and kubectl installation:
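A repo definition along the lines of the official installation docs (run as root):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF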
Check whether SELinux is in permissive mode:
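A quick check (getenforce should print Permissive or Disabled; setenforce 0 flips an enforcing system to permissive until the next reboot):
$ getenforce
$ sudo setenforce 0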
Ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl:
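As in the official docs (run as root; the last command reloads all sysctl settings):
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system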
Install the required Kubernetes components and start the services:
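Something along these lines (Docker from the stock CentOS repos is sufficient here; enabling the services makes them survive reboots):
$ sudo yum install -y docker kubelet kubeadm kubectl
$ sudo systemctl enable docker kubelet
$ sudo systemctl start docker kubelet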
Deploy the cluster via kubeadm:
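For example (the CIDR flag is explained just below):
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16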
I prefer to install Flannel as the main CNI in my cluster. Although there are some prerequisites for a proper Pod network installation, I passed the --pod-network-cidr=10.244.0.0/16 flag to the kubeadm init command.
Create a Kubernetes home directory for your user and store the config file there:
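These are the same commands kubeadm init prints at the end of a successful run:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config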
Install the Pod network, in my case Flannel:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
Finally, check the status of the core Kubernetes Pods:
$ kubectl get pods --all-namespaces
If you still have any doubts, just write a comment below this answer.