Best practices for storing Kubernetes configuration

Published 2020-02-10 11:53

Question:

In several places on the Kubernetes documentation site they recommend that you store your configuration YAML files inside source control for easy version-tracking, rollback, and deployment.

My colleagues and I are currently in the process of trying to decide on the structure of our git repository.

  • We have decided that, since configuration can change without any changes to the app code, we would like to store configurations in a separate, shared repository.
  • We may need multiple versions of some components running side-by-side within a given environment (cluster). These versions may have different configurations.

There seem to be a lot of potential variations, and all of them have shortcomings. What is the accepted way to structure such a repository?

Answer 1:

I think Helm is going to become the standard way to package an application installer for Kubernetes clusters. I'll try creating my own chart to parameterize my app deployments.
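
For reference, the skeleton of a chart is small. A minimal sketch of a Chart.yaml, with hypothetical names (the apiVersion: v2 format is for Helm 3):

# Chart.yaml -- minimal chart metadata; names and versions here are illustrative
apiVersion: v2
name: my-app
description: Deployment configuration for my-app
type: application
version: 0.1.0        # version of the chart itself
appVersion: "1.0.0"   # version of the application the chart deploys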



Answer 2:

There is no established standard yet, I believe. I find Helm's charts too complicated to start with, especially with another unmanaged component running on the k8s cluster. This is the workflow we follow, and it works quite well for a setup of roughly 15 microservices across 5 different environments (two dev, staging, qa, prod).

The 2 key ideas:

  1. Store Kubernetes configurations in the same source repo that holds the rest of the build tooling, e.g. alongside the microservice source code that has the tooling for building/releasing that particular microservice.
  2. Template the kubernetes configuration with something like jinja and render the templates according to the environment you're targeting.

The tooling is reasonably straightforward to put together with a few bash scripts, or by integrating with a Makefile, etc.
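
As an illustration of point 2, a minimal sketch of what such a template could look like, assuming jinja-style placeholders and hypothetical variable names (env, replicas, image_tag):

# deployment.yaml.j2 -- hypothetical template, rendered once per target environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    env: "{{ env }}"
spec:
  replicas: {{ replicas }}
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: "registry.example.com/my-service:{{ image_tag }}"

A Makefile target or bash script can then render this with something like jinja2-cli, using one variables file per environment, and pipe the result to kubectl apply -f -.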

EDIT: to answer some of the questions in the comments

The application source code repository is used as the single source of truth. That means that, if everything works as it should, changes should never flow from the Kubernetes cluster back to the repository.

Making changes directly on the cluster is prohibited in our workflow. If it ever does happen, we have to manually make sure those changes are carried back into the application repository.

Again, just to note: the configurations stored in source control are actually templates and use secretKeyRef quite liberally. This means that some configuration values come in from the CI tooling as the templates are rendered, and some come from Secrets that live only on the cluster (database passwords, API tokens, etc.).
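
To illustrate how the two sources mix, here is a sketch of a container spec (names are hypothetical) where the image tag is filled in by CI at render time and the password comes from a Secret that exists only on the cluster:

# Fragment of a templated Deployment
containers:
  - name: my-service
    image: "registry.example.com/my-service:{{ image_tag }}"  # rendered by the CI tooling
    env:
      - name: DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: my-service-secrets   # Secret created directly on the cluster, never committed
            key: db-password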



Answer 3:

In my opinion, Helm is to Kubernetes what docker-compose is to Docker.

There is no reason to fear Helm: in its most basic usage, all it does is roughly the equivalent of kubectl apply -f on your templates.

Once you get familiar with Helm, you can start using values.yaml and referencing those values from your Kubernetes templates for maximum flexibility.

In values.yaml:

name: my-name

In templates/deployment.yaml:

name: {{ .Values.name }}
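
As a sketch of how that value might be used in a fuller templates/deployment.yaml (everything beyond the single name value is an assumption for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: {{ .Values.name }}
          image: nginx   # placeholder; in practice the image would also come from values.yaml

Running something like helm install my-release ./chart substitutes the values at install time, and individual values can be overridden per environment with --set or an extra values file passed via -f.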

https://helm.sh/

My approach is to create a helm subdirectory in each project, the same way that I include a docker-compose.yml file.
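
For example, a project layout along these lines (file names are illustrative):

my-service/
  Dockerfile
  docker-compose.yml
  helm/
    Chart.yaml
    values.yaml
    templates/
      deployment.yaml
      service.yaml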

In addition to this, you can also maintain a Helm repository for all your projects, with charts that reference pre-built images.



Answer 4:

Use a separate repository to store configuration.

If you have multiple microservices to orchestrate, none of them is authoritative over the configuration, especially when you run multiple configurations in parallel, e.g. for canary testing.

Helm (https://helm.sh/) helps you propagate constants through multiple microservices' configurations. Again, this reflects the fact that those constants/parameters are independent of any single codebase.
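
One way Helm supports this is through global values, which an umbrella chart can share with all of its subcharts; a minimal sketch with hypothetical chart and value names:

# values.yaml of a hypothetical umbrella chart
global:
  imageRegistry: registry.example.com
  environment: staging

# referenced from any subchart template, e.g. charts/orders/templates/deployment.yaml
image: "{{ .Values.global.imageRegistry }}/orders:1.4.2"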