I'd like to know the current RAM limit. (No limit/request was explicitly configured.)
How do I see the current configuration of an existing pod?
[Edit] That configuration would include not only how much memory is in use now, but also the maximum limit, the point at which the pod would be shut down.
(Deliberately blowing up the heap with huge strings suggests a limit of approximately 4 GB, and the Cloud Console shows a crash at 5.4 GB (which of course includes more than the Python interpreter), but I don't know where these values come from. The nodes have up to 10 GB.)
I tried kubectl get pod id-for-the-pod -o yaml, but it shows nothing about memory.
I am using Google Container Engine.
Use the kubectl top command:
kubectl top pod id-for-the-pod
kubectl top --help
Display Resource (CPU/Memory/Storage) usage.
The top command allows you to see the resource consumption for nodes
or pods.
This command requires Heapster to be correctly configured and working
on the server.
Available Commands:
  node        Display Resource (CPU/Memory/Storage) usage of nodes
  pod         Display Resource (CPU/Memory/Storage) usage of pods

Usage:
  kubectl top [flags] [options]
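For example (the pod name and the numbers below are illustrative; the actual output depends on your workload):

kubectl top pod id-for-the-pod

NAME             CPU(cores)   MEMORY(bytes)
id-for-the-pod   12m          256Mi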
You can use
kubectl top pod POD_NAME
It will show you memory and CPU usage.
As already answered by the community, you can run kubectl top pod POD_NAME to see how much memory your pod is currently using. How high it can go depends on the available memory of the node (kubectl describe nodes gives an overview of the resource requests and limits on each node). It also depends on the pod's own memory requests and limits, defined under the "requests" and "limits" fields of "resources" in the pod's configuration. You can also read this relevant link.
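For reference, here is a minimal sketch of where those requests and limits live in a pod manifest (the pod name, image, and values below are placeholders, not taken from the question):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx               # placeholder image
    resources:
      requests:
        memory: "256Mi"        # the scheduler reserves this much for the container
        cpu: "250m"
      limits:
        memory: "1Gi"          # the container is OOM-killed if it exceeds this
        cpu: "500m"

If neither requests nor limits are set, as in the question, the pod gets the BestEffort QoS class and can use memory up to whatever the node has available.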
The edit in the question asks how to see the max memory limit for an existing pod. This should do it:
kubectl -n <namespace> exec <pod-name> -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
Reference: https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt
With a QoS class of BestEffort (seen in the output of kubectl -n <namespace> get pod <pod-name> -o yaml or kubectl -n <namespace> describe pod <pod-name>), there may be no limit other than the available memory on the node where the pod is running, so the value returned can be a very large number (e.g. 9223372036854771712 - see here for an explanation).
Deploy the Metrics Server in the Kubernetes cluster (Heapster is deprecated) and then use
kubectl top pod POD_NAME
to get the pod's CPU and memory usage.
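A common way to install it is to apply the manifest published by the metrics-server project (check that the release matches your cluster version; the URL below is the one documented in that project's README):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl top pod POD_NAME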