I've created a Kubernetes cluster on AWS with kops and can successfully administer it via kubectl from my local machine.

I can view the current config with `kubectl config view`, as well as directly access the stored state at `~/.kube/config`, such as:
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://api.{CLUSTER_NAME}
  name: {CLUSTER_NAME}
contexts:
- context:
    cluster: {CLUSTER_NAME}
    user: {CLUSTER_NAME}
  name: {CLUSTER_NAME}
current-context: {CLUSTER_NAME}
kind: Config
preferences: {}
users:
- name: {CLUSTER_NAME}
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    password: REDACTED
    username: admin
- name: {CLUSTER_NAME}-basic-auth
  user:
    password: REDACTED
    username: admin
```
I need to enable other users to also administer the cluster. This user guide describes how to define these on another user's machine, but doesn't describe how to actually create the user's credentials within the cluster itself. How do you do this?

Also, is it safe to just share the `cluster.certificate-authority-data`?
According to the documentation, you have to use a third-party tool for this.

== Edit ==

One solution, per the documentation, could be to manually create a user entry in the kubeconfig file.
For a full overview of Authentication, refer to the official Kubernetes docs on Authentication and Authorization.

For users, ideally you use an Identity Provider for Kubernetes (OpenID Connect).

If you are on GKE / ACS, you integrate with the respective Identity and Access Management frameworks.

If you self-host Kubernetes (which is the case when you use kops), you may use coreos/dex to integrate with LDAP / OAuth2 identity providers - a good reference is this detailed 2-part SSO for Kubernetes article. For Dex there are also a few open-source CLI clients available.
If you are looking for a quick and easy (though not the most secure or easiest to manage in the long run) way to get started, you may abuse `serviceaccounts` - with 2 options for specialised Policies to control access (see below).

NOTE: since 1.6, Role-Based Access Control is strongly recommended! This answer does not cover RBAC setup.
EDIT: A great guide by Bitnami on user setup with RBAC is also available.
Steps to enable service account access (depending on whether your cluster configuration includes RBAC or ABAC policies, these accounts may have full Admin rights!):

EDIT: Here is a bash script to automate Service Account creation - it follows the steps below.
1. Create a service account for user Alice
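A sketch of step 1 (the service account name `alice` follows the example in the text):

```shell
# Create a service account for Alice in the default namespace
kubectl create sa alice
```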
2. Get the related secret
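Assuming `jq` is available for JSON parsing, the secret name can be captured like this (note that on recent Kubernetes versions, 1.24+, token secrets are no longer created automatically for service accounts):

```shell
# Store the name of the secret associated with the service account
secret=$(kubectl get sa alice -o json | jq -r '.secrets[].name')
```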
3. Get `ca.crt` from the secret (using OSX `base64` with the `-D` flag for decode)
4. Get the service account token from the secret
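Steps 3 and 4 might look like the following (on Linux, `base64 -d` replaces OSX's `base64 -D`):

```shell
# Extract the cluster CA certificate from the secret (OSX: base64 -D)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -D > ca.crt

# Extract the service account's bearer token from the same secret
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -D)
```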
5. Get information from your kubectl config (current-context, server, ...)
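One way to pull the current context and server endpoint out of the local kubectl config:

```shell
# Current context name
c=$(kubectl config current-context)

# Cluster name used by that context
name=$(kubectl config get-contexts "$c" | awk '{print $3}' | tail -n 1)

# API server endpoint of that cluster
endpoint=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}")
```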
6. On a fresh machine, follow these steps (given the `ca.crt` and `$endpoint` information retrieved above):
   1. Install `kubectl`
   2. Set cluster (run in the directory where `ca.crt` is stored)
   3. Set user credentials
   4. Define the combination of the alice user with the staging cluster
   5. Switch current-context to `alice-staging` for the user
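Assuming a macOS machine with Homebrew, the client-side setup above might be scripted as follows (the cluster and context names `cluster-staging` / `alice-staging` are illustrative):

```shell
# 1. Install kubectl
brew install kubectl

# 2. Set cluster (run in the directory where ca.crt is stored)
kubectl config set-cluster cluster-staging \
  --embed-certs=true \
  --server="$endpoint" \
  --certificate-authority=./ca.crt

# 3. Set user credentials using the service account token
kubectl config set-credentials alice-staging --token="$user_token"

# 4. Combine the alice user with the staging cluster in one context
kubectl config set-context alice-staging \
  --cluster=cluster-staging \
  --user=alice-staging \
  --namespace=default

# 5. Switch the current context to alice-staging
kubectl config use-context alice-staging
```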
To control user access with policies (using ABAC), you need to create a `policy` file, for example a `policy.json`. Provision this `policy.json` on every master node and add the `--authorization-mode=ABAC --authorization-policy-file=/path/to/policy.json` flags to the API servers.

This would allow Alice (through her service account) read-only rights to all resources in the default namespace only.
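A minimal policy along these lines, in the ABAC v1beta1 format (one JSON object per line; the service account authenticates as user `system:serviceaccount:default:alice`), might be written out as:

```shell
# Write an ABAC policy granting Alice's service account read-only access
# to every resource in the default namespace
cat > policy.json <<'EOF'
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "system:serviceaccount:default:alice", "namespace": "default", "resource": "*", "readonly": true}}
EOF
```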
The Bitnami guide works for me, even if you use minikube. Most important is that your cluster supports RBAC. https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
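For comparison, an RBAC setup in the spirit of that guide boils down to a Role plus a RoleBinding (the names `pod-reader` and `employee` here are illustrative, not from the guide's exact example):

```shell
# Role allowing read-only access to pods in the default namespace
kubectl create role pod-reader \
  --verb=get --verb=list --verb=watch \
  --resource=pods --namespace=default

# Bind the role to the user employee (as authenticated by their client cert)
kubectl create rolebinding pod-reader-binding \
  --role=pod-reader --user=employee --namespace=default
```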