I am new to Kubernetes and I have an issue with one of my pods. When I run the command

kubectl get pods

I get the following result:
NAME                   READY     STATUS             RESTARTS   AGE
mysql-apim-db-1viwg    1/1       Running            1          20h
mysql-govdb-qioee      1/1       Running            1          20h
mysql-userdb-l8q8c     1/1       Running            0          20h
wso2am-default-813fy   0/1       ImagePullBackOff   0          20h
Due to an issue with the "wso2am-default-813fy" pod, I need to restart it. Any suggestions?
If the Pod is part of a Deployment or Service, deleting it will restart the Pod and, potentially, place it onto another node:

$ kubectl delete po $POD_NAME
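For example, with the failing pod from the question (assuming it is owned by a controller such as a Deployment or ReplicationController and lives in the default namespace):

# Delete the stuck pod; its controller notices and creates a replacement
$ kubectl delete po wso2am-default-813fy
pod "wso2am-default-813fy" deleted

# A new pod with a different generated suffix appears and pulls the image again
$ kubectl get pods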
Or replace it if it's an individual Pod:

$ kubectl get po -n $namespace $POD_NAME -o yaml | kubectl replace -f -
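With the names from the question that would be something like (the default namespace is an assumption):

# Dump the live Pod definition and feed it straight back into replace
$ kubectl get po -n default wso2am-default-813fy -o yaml | kubectl replace -f -

Note that a plain replace updates the existing Pod object in place; the --force variant shown further down deletes and re-creates the Pod, which is what actually triggers a fresh image pull.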
Try deleting the pod; it will try to pull the image again:

kubectl delete pod <pod_name> -n <namespace_name>
Usually in the case of "ImagePullBackOff" the pull is retried automatically after a few seconds/minutes. In case you want to retry manually, you can delete the old pod and re-create it. The one-line command to delete and re-create the pod would be:

kubectl replace --force -f <yml_file_describing_pod>

In case of not having the yaml file:

kubectl get pod PODNAME -n NAMESPACE -o yaml | kubectl replace --force -f -
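Applied to the pod from the question (again assuming the default namespace):

# Re-serialize the live pod and force-replace it: the pod is deleted and
# immediately re-created, which triggers a fresh image pull
$ kubectl get pod wso2am-default-813fy -n default -o yaml | kubectl replace --force -f -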
First, try to see what's wrong with the pod.
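The events shown by kubectl describe usually include the exact pull error; the pod name below is taken from the question:

# Look for "Failed to pull image" / "ErrImagePull" entries under Events
$ kubectl describe pod wso2am-default-813fy

# If a previous container instance managed to start, its logs may also help
$ kubectl logs -p wso2am-default-813fy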
In my case it was a problem with the YAML file.
So, I needed to correct the configuration file and replace it:
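Something like this, assuming the corrected pod definition was saved as wso2am.yaml (the filename is illustrative):

# Force-replace: delete the broken pod and re-create it from the fixed file
$ kubectl replace --force -f wso2am.yaml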
If all goes well, you should see something like:
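pod "wso2am-default-813fy" deleted
pod "wso2am-default-813fy" replaced

(the exact names and wording depend on your pod and your kubectl version)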
Details of this can be found in the Kubernetes documentation, on the "manage-deployment" and "kubectl cheatsheet" pages at the time of writing.