Resize instance types on Container Engine cluster

Posted 2019-01-07 21:17

Some of our containers need more memory than the instance type currently deployed in our Container Engine cluster provides. Is there a recommended practice for rebuilding the Container Engine template with larger instances after the cluster has been created?

For example, going from n1-standard-2 to n1-highmem-8 GCE instances to run containers that need more than 8 GB of RAM?

Answers
地球回转人心会变
#3 · 2019-01-07 21:54

go from n1-standard-2 to n1-highmem-8 GCE instances to run containers that need more than 8 GB of RAM?

Kubernetes 1.12 (Sept. 2018) should provide an official way to manage your existing resources, with Kubernetes issue 21, "Vertical Scaling of Pods" (or "VPA": Vertical Pod Autoscaler).

As announced on the blog:

Vertical Scaling of Pods is now in beta, which makes it possible to vary the resource limits on a pod over its lifetime. In particular, this is valuable for pets (i.e., pods that are very costly to destroy and re-create).

Warning:

This is landing around 1.12; however, it is launched as an independent addon and is not included in the 1.12 Kubernetes release.
Sig-Architecture, at the beginning of this cycle, decided to keep the VPA API as a CRD and thus not bind it to any particular K8S release.

See more in:

https://banzaicloud.com/img/blog/cluster-autoscaler/vertical-pod-autoscaler.png

That last article from BanzaiCloud is a bit dated (some links are no longer valid), but it still illustrates how you can manage your pod resources.
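
For illustration, here is a minimal sketch of what a VPA object can look like once the addon is installed. The names my-app-vpa and my-app are hypothetical placeholders, and the API version shown (autoscaling.k8s.io/v1beta2) is the beta-era one, so it may differ in your release:

$ kubectl apply -f - <<EOF
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa                # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                  # hypothetical Deployment to scale vertically
  updatePolicy:
    updateMode: "Auto"            # VPA may evict pods to apply new resource requests
EOF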

Deceive 欺骗
#4 · 2019-01-07 21:56

Container Engine doesn't currently have an API for doing this, but since it uses a Compute Engine instance group for the nodes in your cluster, you can actually update it without GKE's help. In the Developers Console, copy the instance template whose name looks like "gke-<cluster-name>-<id>" and modify its machine type, then edit the similarly named instance group to use the new template. You can find these options under Compute > Compute Engine > Instance templates and Compute > Compute Engine > Instance groups, respectively.
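
A rough equivalent sketch with gcloud (the template and group names below are illustrative placeholders; these are plain Compute Engine commands, not a GKE API):

$ gcloud compute instance-templates list --filter="name ~ ^gke-"

$ gcloud compute instance-groups managed set-instance-template gke-mycluster-default-pool-xxxx-grp --template gke-mycluster-highmem --zone europe-west1-d

The first command locates the GKE-managed templates; after copying and editing one as described above, the second points the managed instance group at the new template. Only nodes created afterwards pick up the new machine type, so existing instances would still need to be recreated.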

Luminary・发光体
#5 · 2019-01-07 21:57

A different approach would be:

(1) create a new node-pool in the GKE cluster with a vertically scaled machine type ...

$ gcloud container node-pools create pool-n1std2 --zone europe-west1-d --cluster prod-cluster-1 --machine-type n1-standard-2 --image-type gci --disk-size=250 --num-nodes 3

(2) then migrate the workloads off the old nodes ...

$ kubectl drain gke-prod-cluster-1-default-pool-f1eabad5-9ml5 --delete-local-data --force

(3) and finally, delete the old node-pool

$ gcloud container node-pools delete default-pool --cluster=prod-cluster-1
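
As a quick sanity check around steps 2 and 3 (purely illustrative):

$ kubectl get nodes

$ kubectl get pods --all-namespaces -o wide

The first shows which nodes remain; the second shows the nodes onto which the pods were rescheduled.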

Notes:

  • Warning: Step 2 deletes node-local volumes such as emptyDir!
  • Step 2 needs to be repeated for each node in the pool (see the loop sketched after this list)
  • Instead of draining the nodes, one might configure a proper nodeSelector to schedule the pods onto the new pool; the label to match would be cloud.google.com/gke-nodepool: pool-n1std2 (a fragment is sketched below)
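
A possible shape for the per-node loop from the second note (a sketch: the label value default-pool matches the pool being drained, and --ignore-daemonsets is added because DaemonSet-managed pods cannot be evicted by drain):

$ OLD_NODES=$(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o jsonpath='{.items[*].metadata.name}')

$ for node in $OLD_NODES; do kubectl cordon "$node"; done

$ for node in $OLD_NODES; do kubectl drain "$node" --delete-local-data --force --ignore-daemonsets; done

Cordoning every old node first keeps evicted pods from landing on another not-yet-drained node in the old pool. And for the nodeSelector alternative in the last note, the pod template fragment would be roughly:

      nodeSelector:
        cloud.google.com/gke-nodepool: pool-n1std2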