I have been trying to run a Tomcat container on port 5000 on a cluster using Kubernetes. But when I use kubectl create -f tmocat_pod.yaml, it creates the pod, but docker ps does not give any output. Why is that?
Ideally, when a pod is running, it means it is running the container defined in the YAML file inside that pod. Why does docker ps not show any containers running? I am following the below URLs:
- http://containertutorials.com/get_started_kubernetes/k8s_example.html
- https://blog.jetstack.io/blog/k8s-getting-started-part2/
How can I get it running and see Tomcat in the browser on port 5000?
In Kubernetes, Docker containers run inside Pods, Pods run on Nodes, and Nodes run on your machine (minikube/GKE).
When you run
kubectl create -f tmocat_pod.yaml
you basically create a pod, and it runs the Docker container on that pod. The node that holds this pod is basically a virtual instance; if you could SSH into that node, docker ps would work.
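For reference, a minimal pod manifest along the lines of tmocat_pod.yaml might look like the sketch below; the name, labels, and image tag are assumptions, since your actual file may differ:

```yaml
# Hypothetical equivalent of tmocat_pod.yaml -- adjust to your setup
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
  labels:
    app: tomcat        # a label makes the pod selectable by a Service later
spec:
  containers:
  - name: tomcat
    image: tomcat:8.0  # the image your YAML references may differ
    ports:
    - containerPort: 8080   # Tomcat's default HTTP port
```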
What you need is:
kubectl get pods
<-- the equivalent of docker ps; it shows you all the running pods (think of them as Docker containers).
kubectl get nodes
<-- view the host machines for your pods.
kubectl describe pod <pod-name>
<-- view detailed status and events for a pod.
kubectl logs <pod-name>
<-- will give you the logs for the specific pod.
If your pod is running successfully and you are looking for the container on the node where the pod is scheduled, the issue could be that Kubernetes is using a different container runtime.
Example
Here I am able to exec into the pod, and I am on the same node where the pod is scheduled, but
docker ps
doesn't show the container. In my case, kubelet is using a different container runtime; one of the arguments to the kubelet service is
--container-runtime-endpoint=unix:///var/run/cri-containerd.sock
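A quick way to check which runtime your kubelet uses is to look at its command line. The sketch below parses a simulated kubelet invocation so it can run standalone; the binary path and socket are assumptions taken from the flag above:

```shell
# On a real node you would capture the kubelet command line with:
#   ps -ef | grep kubelet
# Here a simulated line stands in for that output.
kubelet_cmd='/usr/bin/kubelet --container-runtime-endpoint=unix:///var/run/cri-containerd.sock'

# Strip everything up to and including the flag to isolate the endpoint
endpoint="${kubelet_cmd#*--container-runtime-endpoint=}"
endpoint="${endpoint%% *}"   # drop any flags that follow
echo "$endpoint"
```

If the endpoint points at containerd rather than Docker, crictl ps (not docker ps) will list the pod's containers on that node.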
I'm not sure where you are running the
docker ps
command, but if you are trying to do that from your host machine and the k8s cluster is located elsewhere (i.e. your machine is not a node in the cluster),
docker ps
will not return anything, since the containers are not tied to your Docker host. Assuming your pod is running,
kubectl get pods
will display all of your running pods. To check further details, you can use
kubectl describe pod <yourpodname>
to check the status of each container (in great detail). To get the pod names, you should be able to use tab-completion with the Kubernetes CLI. Also, if your pod contains multiple containers, you will need to give the container name as well, which you can also tab-complete after you've selected your pod.
If your containers and pods are already running, then you shouldn't need to troubleshoot them much further. To make them accessible from the public Internet, take a look at Services (https://kubernetes.io/docs/concepts/services-networking/service/), which give your application a fixed, easily reachable IP address.
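A minimal Service sketch for a Tomcat pod could look like this; the service name and label selector are assumptions, so match them to your pod's actual labels:

```yaml
# Hypothetical Service exposing Tomcat; adjust names/labels to your pod
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
spec:
  type: NodePort          # exposes the service on each node's IP
  selector:
    app: tomcat           # must match the pod's labels
  ports:
  - port: 5000            # service port inside the cluster
    targetPort: 8080      # Tomcat listens on 8080 by default
```

With minikube, `minikube service tomcat-svc --url` prints the externally reachable URL for the service.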
Have you tried "docker ps -a" to see if the container is dead? If it is there, you can see its logs with "docker logs <container-id>", and maybe that gives you a hint.
The Docker containers should be running on the virtual machine. Since I only installed minikube on my local machine, I confirmed the following will bring what you want:
Just try
minikube ssh
(or the equivalent for your Kubernetes setup) and run docker ps inside the VM.