I set up my Kubernetes cluster using kops, and I did so from my local machine, so my `.kube` directory is stored there; kops itself, however, is configured to keep its state in S3.

I'm now setting up my CI server, and I want to run my `kubectl` commands from that box. How do I go about importing the existing state to that server?
To run `kubectl` commands, you will need the cluster's API server URL and the related credentials for authentication. By convention, that data is stored in the `~/.kube/config` file. You can also view it via the `kubectl config view` command.
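For example, to inspect the client configuration your local machine is currently using (credentials are redacted unless you ask for them):

```
# Show the merged client configuration kubectl is using.
kubectl config view

# Limit the output to the currently selected context only.
kubectl config view --minify
```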
In order to run `kubectl` on your CI server, you need to make sure the `~/.kube/config` file there contains all the information the `kubectl` client needs.

With kops, a simple, naive solution is to (see the consolidated sketch after this list):
1) Install kops and kubectl on your CI server.
2) Configure AWS access credentials on your CI server (either via an IAM role or simply env vars), and make sure they grant access to your S3 state store path.
3) Set the env var kops needs to find your cluster: `KOPS_STATE_STORE`, pointing at your S3 state store.
4) Use the `kops export kubecfg` command to generate the kubeconfig needed for running kubectl; see https://github.com/kubernetes/kops/blob/master/docs/cli/kops_export.md
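Put together, the steps above might look roughly like this on the CI box. This is a sketch: the bucket and cluster names are placeholders to replace with your own, and step 2 assumes env vars rather than an IAM role.

```
# 2) AWS credentials with access to the kops state store
export AWS_ACCESS_KEY_ID=...        # placeholder
export AWS_SECRET_ACCESS_KEY=...    # placeholder

# 3) point kops at the same S3 state store you use locally
export KOPS_STATE_STORE=s3://your-state-store-bucket   # placeholder
export NAME=your-cluster.example.com                   # placeholder

# 4) write the cluster's credentials into ~/.kube/config
# (newer kops releases may require the --admin flag here)
kops export kubecfg $NAME
```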
Now the `~/.kube/config` file on your CI server should contain all the information `kubectl` needs to access your cluster.

Note that this will use the default admin account on your CI server. To implement a more secure CI/CD environment, you should instead create a service account bound to the required permission scope (a namespace or a type of resource, for example), and place its credentials on your CI server, as sketched below.
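As a sketch of that more locked-down setup (the `ci` namespace and `ci-deployer` account names are hypothetical, and `kubectl create token` requires Kubernetes v1.24+; on older clusters you would read the token from the service account's auto-generated Secret instead):

```
# Create a namespace-scoped service account for the CI system.
kubectl create namespace ci
kubectl create serviceaccount ci-deployer -n ci

# Grant it only what it needs, e.g. the built-in "edit" role in that namespace.
kubectl create rolebinding ci-deployer-edit \
  --clusterrole=edit \
  --serviceaccount=ci:ci-deployer \
  -n ci

# Mint a token and register it as a kubectl credential on the CI server.
TOKEN=$(kubectl create token ci-deployer -n ci)
kubectl config set-credentials ci-deployer --token="$TOKEN"
```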
`.kube/config` is hardly "state"; it is just client configuration, so you can simply take its content (or the relevant part of it, if you have more contexts locally) and use it on another machine. That is, unless you want to create a dedicated user (key/cert) for CI, in which case you need to create separate credentials; if you use a key/cert pair, it needs a different certificate subject so that the users can be recognised as distinct.
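If you take the simple route, one way to carve out just the active context, with certificates embedded inline so no extra files need copying, is the following (`ci-server` is a placeholder host):

```
# On your local machine: export only the current context, with certs and
# keys embedded inline instead of referenced as file paths.
kubectl config view --minify --flatten > ci-kubeconfig

# Copy it to the CI server and put it where kubectl looks by default...
scp ci-kubeconfig ci-server:~/.kube/config

# ...or keep it elsewhere and select it explicitly:
export KUBECONFIG=~/ci-kubeconfig
```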