Difference between API versions v2beta1 and v2beta2

Published 2020-08-23 09:35

The Kubernetes Horizontal Pod Autoscaler walkthrough in https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ explains that we can perform autoscaling on custom metrics. What I didn't understand is when to use the two API versions: v2beta1 and v2beta2. If anybody can explain, I would really appreciate it.

Thanks in advance.

4 Answers
Root(大扎)
#2 · 2020-08-23 09:50

Just like any other software product, Kubernetes releases new versions with new features. In Kubernetes, every object is specified with an API version, and with each new API version an object gains new features or additional capabilities.

So in the case of the HPA, v2beta2 has more features than v2beta1, which are described in the documentation. Always remember to use the stable release (e.g. v1) for a Kubernetes object if one is available; if not, use the latest release (v2beta2 in the case of the HPA).

迷人小祖宗
#3 · 2020-08-23 09:59

The first version, autoscaling/v2beta1, is mainly used to scale your application on resource metrics, that is, the CPU and memory utilization of your pods.

The second version, autoscaling/v2beta2, is built around autoscaling on custom metrics, including metrics coming from outside of Kubernetes through the External metric source.

metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

An External metric identifies the metric to autoscale on by a metric name and a label selector. Those metrics can come from anywhere, such as a Stackdriver or Prometheus monitoring installation, so you can, for example, scale your application based on the result of a Prometheus query.

It is generally better to use the v2beta2 API: it can scale on CPU and memory as well as on custom and external metrics, and it is the version that continues to receive new capabilities.

The snippet above shows how you can specify a target CPU utilization in the v2beta2 API.
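
For comparison, an External metric in the v2beta2 API looks roughly like this (a sketch modelled on the walkthrough linked in the question; the metric name and label are placeholders, not values from any real system):

metrics:
  - type: External
    external:
      metric:
        name: queue_messages_ready      # placeholder metric name
        selector:
          matchLabels:
            queue: worker_tasks         # placeholder label
      target:
        type: AverageValue
        averageValue: 30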

冷血范
#4 · 2020-08-23 10:04

In Kubernetes 1.18, autoscaling/v2beta2 adds a new API field, spec.behavior, which lets you define how quickly or slowly pods are scaled up and down. autoscaling/v2beta1 does not have this field.
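
A rough sketch of what that field looks like (the field names follow the v2beta2 API in 1.18; the numbers here are arbitrary examples):

behavior:
  scaleDown:
    stabilizationWindowSeconds: 300     # wait 5 minutes before scaling down
    policies:
      - type: Pods
        value: 1                        # remove at most 1 pod...
        periodSeconds: 60               # ...per minute
  scaleUp:
    policies:
      - type: Percent
        value: 100                      # at most double the replica count...
        periodSeconds: 15               # ...every 15 seconds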

Besides that, as far as I can tell, they support the same features. I'll explain what I think the motivation for the new API version was:

autoscaling/v2beta2 was released in Kubernetes version 1.12 and the release notes state:

  • We released autoscaling/v2beta2, which cleans up and unifies the API

The "cleans up and unifies the API" is probably referring to that fact that v2beta2 uses the same MetricIdentifier and MetricTarget api specs for everything under spec.metrics, which are the external, object, pods, and resource fields.

In v2beta1, those fields have quite different specs, which (in my opinion) makes it more difficult to figure out how to use them.
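
For example, a Pods metric is written quite differently in the two versions (a sketch based on the references below; packets-per-second is just an illustrative metric name):

# v2beta1
metrics:
  - type: Pods
    pods:
      metricName: packets-per-second
      targetAverageValue: 1k

# v2beta2
metrics:
  - type: Pods
    pods:
      metric:
        name: packets-per-second
      target:
        type: AverageValue
        averageValue: 1k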

Kubernetes release notes about the new behavior field:

https://kubernetes.io/docs/setup/release/notes/#new-api-fields

Kubernetes 1.12 reference on the v2beta1 fields (click each one to see how different they are):

https://v1-16.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#metricspec-v2beta1-autoscaling

Kubernetes 1.12 reference on the v2beta2 fields (click each one to see how they've been "cleaned up"):

https://v1-16.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#metricspec-v2beta2-autoscaling

Evening l夕情丶
#5 · 2020-08-23 10:05

If you need to drive the Horizontal Pod Autoscaler with a custom external metric, and only v2beta1 is available to you (I think this is still true of GKE), it can be done; we do this routinely on GKE. You need:

  1. A Stackdriver monitoring metric, possibly one you create yourself,
  2. If the metric isn't derived from sampling Stackdriver logs, a way to publish data to the Stackdriver monitoring metric, such as a cronjob that runs no more than once per minute (we use a little Python script and Google's Python library for monitoring_v3; see the sketch after this list), and
  3. A custom metrics adapter to expose Stackdriver monitoring to the HPA (e.g., on Google, gcr.io/google-containers/custom-metrics-stackdriver-adapter:v0.10.0). There's a tutorial on how to deploy this adapter here. You'll need to ensure that you grant the required RBAC permissions to the service account running the adapter, as shown here. You may or may not want to grant the principal that deploys the configuration the cluster-admin role as described in the tutorial; we use Helm 2 with Tiller and are careful to grant Tiller the least privilege needed to deploy.
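
For step 2, the publishing CronJob is nothing special. A minimal sketch (the name, image, and script here are hypothetical placeholders; batch/v1beta1 was the CronJob API version on clusters of that era, newer clusters use batch/v1):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: metric-publisher                 # hypothetical name
spec:
  schedule: "* * * * *"                  # at most once per minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: publish
              # hypothetical image: it runs a small Python script that writes
              # a data point to the Stackdriver metric via monitoring_v3
              image: gcr.io/your-project/metric-publisher:latest
              command: ["python", "publish_metric.py"]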

Configure your HPA this way:

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  ...
spec:
  scaleTargetRef:
    kind: StatefulSet          # e.g.; whatever kind you are scaling
    name: name-of-pod-to-scale
    apiVersion: apps/v1        # e.g.; the API version of that kind
  minReplicas: 1
  maxReplicas: ...
  metrics:
    - type: External
      external:
        metricName: "custom.googleapis.com|your_metric_name"
        metricSelector:
          matchLabels:
            resource.type: "generic_task"
            resource.labels.job: ...
            resource.labels.namespace: ...
            resource.labels.project_id: ...
            resource.labels.task_id: ...
        targetValue: 0.7       # e.g., if you publish a metric that measures the ratio between demand and current capacity

If you ask kubectl for your HPA object, you won't see the autoscaling/v2beta1 settings, but querying the raw API does show them:

kubectl get --raw /apis/autoscaling/v2beta1/namespaces/your-namespace/horizontalpodautoscalers/your-autoscaler | jq

So far, we've only exercised this on GKE, and it's clearly Stackdriver-specific. To the extent that Stackdriver can be deployed on other managed Kubernetes platforms, it might actually be portable. Otherwise you end up with a different way to publish the custom metric on each platform, using a different metrics publishing library in your cronjob and a different custom metrics adapter; we know such an adapter exists for Azure, for example.
