Should I edit the salt tar files after a Kubernetes cluster is stood up?

Posted 2019-07-31 16:11

Question:

I've used curl -sS https://get.k8s.io | bash to create a cluster on Google Compute Engine using Kubernetes 1.2.4. This worked great. Now I want to enable ABAC authorization mode by adding a few flags to the kube-apiserver command in the kube-apiserver pod spec.
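For context, the change I'm after is something like the following on the kube-apiserver command line. This is a sketch with a policy-file path of my own choosing, and the exact policy-file schema for 1.2.x is worth double-checking against the docs for that release:

    kube-apiserver \
      ...existing flags... \
      --authorization-mode=ABAC \
      --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl

    # the policy file holds one JSON object per line, e.g.
    # {"user": "admin", "namespace": "*", "resource": "*"}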

I'm unclear whether I should adjust the salt files once they're tarred and gzipped. The salt file that the pod spec is generated from is here, but editing it after the cluster is stood up has a few additional requirements (a rough shell sketch follows the list):

  • I have to unpack the salt tarball that the install script uploaded to Google Cloud Storage for me
  • Edit the salt files
  • Tar/gzip them back up and generate a new checksum file
  • Push these to GCS
  • Update each instance's kube-env metadata so that SALT_TAR_HASH is correct
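For what it's worth, that manual round trip looks roughly like this in shell (the bucket path, tarball layout, and hash command are assumptions from my setup; yours may differ):

    # fetch and unpack the salt tarball the install script uploaded
    gsutil cp gs://my-kube-staging/devel/kubernetes-salt.tar.gz .
    tar xzf kubernetes-salt.tar.gz

    # ...edit the salt files...

    # repackage and recompute the checksum (the new SALT_TAR_HASH value)
    tar czf kubernetes-salt.tar.gz kubernetes/
    sha1sum kubernetes-salt.tar.gz

    # push back to GCS and update each instance's kube-env metadata,
    # where kube-env.yaml is a locally edited copy holding the new hash
    gsutil cp kubernetes-salt.tar.gz gs://my-kube-staging/devel/
    gcloud compute instances add-metadata kubernetes-master \
      --metadata-from-file kube-env=kube-env.yaml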

It feels like I'm going down the wrong path here, since these changes will also collide with upgrades.

Is there a better way to configure the pods, services, etc. that are baked into the install script without having to do all of this?

Answer 1:

The customization built into the install script comes from environment variables that you can set to change its behavior (see cluster/gce/config-default.sh). If overriding one of these variables doesn't work (which I believe is the case for ABAC), then your only option is to modify the salt files manually.
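When a knob does exist as a variable, overriding it is just a matter of exporting it before bringing the cluster up. A small sketch; the variable names are ones I recall from config-default.sh around that release, so verify them against your copy:

    # see cluster/gce/config-default.sh for the supported variables
    export NUM_NODES=5
    export KUBE_GCE_ZONE=us-central1-b
    ./cluster/kube-up.sh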

If you are comfortable building Kubernetes from source, your easiest path would be to clone the github repository at the desired release version, modify the salt files locally, and then run make quick-release followed by ./cluster/kube-up.sh. This will build a release from source, bundle in your locally modified salt files, generate a checksum, upload the salt files to Google Cloud Storage, and then launch a cluster with the correct salt files and checksum.
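Concretely, something like this (the manifest path is where the kube-apiserver pod spec template lived in the 1.2 source tree; double-check it against your checkout):

    git clone https://github.com/kubernetes/kubernetes.git
    cd kubernetes
    git checkout v1.2.4

    # edit the salt file that the kube-apiserver pod spec is generated from
    vi cluster/saltbase/salt/kube-apiserver/kube-apiserver.manifest

    # build a release from source, then bring up a cluster that uses it
    make quick-release
    ./cluster/kube-up.sh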

If you don't want to build from source, then rather than adjusting the kube-env metadata entry on every instance, you can fix it in the instance template and then delete each instance; they will be automatically replaced by new instances that inherit the changes you made to the template.
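Sketched with gcloud (the group, template, and instance names here are placeholders based on the defaults kube-up used; substitute your own). Note that GCE instance templates are immutable, so "fixing" one in practice means creating a modified copy and swapping it in:

    # point the managed instance group at the fixed template...
    gcloud compute instance-groups managed set-instance-template \
        kubernetes-minion-group --template=my-fixed-template --zone=us-central1-b

    # ...then recreate instances so they pick up the new kube-env
    gcloud compute instance-groups managed recreate-instances \
        kubernetes-minion-group --instances=kubernetes-minion-abcd --zone=us-central1-b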

Your current mechanism won't really mess with upgrades, because upgrades create a new instance template at the new version. Any changes that you've made to the old instance template (or old nodes directly) won't be carried forward to the new nodes (for better or worse).