Kubernetes namespace default service account

Published 2020-05-24 03:11

Question:

If not specified otherwise, a pod runs with the default service account of its namespace. How can I check what the default service account is authorized to do? Does its token need to be mounted into every pod, and if not, how can we disable this behavior at the namespace level or cluster level?

Still searching the documentation though.

Environment: Kubernetes 1.12, with RBAC

What other use cases should the default service account handle? Can/should we use it as the service account to create and manage the k8s deployments in a namespace? For example, we would not use real user accounts to create things in the cluster, because users come and go in a team/org.
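One way to check what the default service account is authorized to do is to impersonate it with kubectl auth can-i. This is a sketch: the namespace foo is hypothetical, and the --list flag needs a reasonably recent kubectl.

```shell
# Ask a single question; prints "yes" or "no"
kubectl auth can-i list services \
  --as=system:serviceaccount:foo:default -n foo

# Or dump everything the account may do in the namespace
kubectl auth can-i --list \
  --as=system:serviceaccount:foo:default -n foo
```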

Answer 1:

  1. A default ServiceAccount is automatically created for each namespace; every namespace has a default SA:

kubectl get sa
NAME      SECRETS   AGE
default   1         1d

  2. ServiceAccounts can be added when required. Each pod is associated with exactly one ServiceAccount, but multiple pods can use the same ServiceAccount.

  3. A pod can only use a ServiceAccount from the same namespace.

  4. You can assign a ServiceAccount to a pod by specifying the account’s name in the pod manifest. If you don’t assign it explicitly, the pod will use the default ServiceAccount in its namespace.

  5. The default permissions for a ServiceAccount don’t allow it to list or modify any resources. The default ServiceAccount isn’t allowed to view cluster state, let alone modify it in any way.

  6. By default, the default ServiceAccount in a namespace has no permissions beyond those of an unauthenticated user.

  7. Therefore pods by default can’t even view cluster state. It’s up to you to grant them appropriate permissions to do that.
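To see which service account a given pod actually runs as (pod name test and namespace foo are hypothetical here), you can read it straight from the pod spec:

```shell
# Prints the service account the pod runs as;
# "default" if none was specified in the manifest
kubectl get pod test -n foo -o jsonpath='{.spec.serviceAccountName}'
```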

kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services is forbidden: User \"system:serviceaccount:foo:default\" cannot list resource \"services\" in API group \"\" in the namespace \"foo\"",
  "reason": "Forbidden",
  "details": { "kind": "services" },
  "code": 403
}

As can be seen above, the default service account cannot list services.

But when given a proper Role and RoleBinding like the ones below:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: foo-role
  namespace: foo
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: test-foo
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: foo-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
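Assuming the two manifests above are saved to files (the file names here are hypothetical), they can be applied with kubectl apply; alternatively, kubectl can generate the equivalent objects directly:

```shell
# From the saved manifests
kubectl apply -f foo-role.yaml -f test-foo-rolebinding.yaml

# Or create the equivalent objects in one line each
kubectl create role foo-role --verb=get,list --resource=services -n foo
kubectl create rolebinding test-foo --role=foo-role \
  --serviceaccount=foo:default -n foo
```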

now I am able to list the service resource:

kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/foo/services",
    "resourceVersion": "457324"
  },
  "items": []
}

  1. Giving all your ServiceAccounts the cluster-admin ClusterRole is a bad idea. It’s best to give everyone only the permissions they need to do their job, and not a single permission more.

  2. It’s a good idea to create a specific ServiceAccount for each pod and then associate it with a tailor-made Role or ClusterRole through a RoleBinding.

  3. If one of your pods only needs to read pods while another also needs to modify them, create two different ServiceAccounts and make those pods use them by specifying the serviceAccountName property in the pod spec.
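As a sketch of that last point, a pod can opt into a dedicated service account like this (all names are hypothetical, and the ServiceAccount must already exist in the same namespace):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-reader            # hypothetical pod name
  namespace: foo
spec:
  serviceAccountName: pod-reader-sa   # hypothetical; must exist in namespace foo
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```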

You can refer to the link below for an in-depth explanation:

Service account example with roles

You can check

kubectl explain serviceaccount.automountServiceAccountToken

and edit the service account to set automountServiceAccountToken: false:

kubectl edit serviceaccount default -o yaml

apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-10-14T08:26:37Z
  name: default
  namespace: default
  resourceVersion: "459688"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: de71e624-cf8a-11e8-abce-0642c77524e8
secrets:
- name: default-token-q66j4

Once this change is done, whichever pod you spawn doesn’t get a service account token mounted, as can be seen below:

kubectl exec tp -it bash
root@tp:/# cd /var/run/secrets/kubernetes.io/serviceaccount
bash: cd: /var/run/secrets/kubernetes.io/serviceaccount: No such file or directory
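Automounting can also be opted out per pod rather than per service account; according to the Kubernetes documentation, the pod spec setting takes precedence over the service account’s. A minimal sketch (hypothetical names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-token              # hypothetical pod name
spec:
  automountServiceAccountToken: false   # overrides the service account's setting
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```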


Answer 2:

An application/deployment can run with a service account other than default by specifying it in the serviceAccountName field of a deployment configuration.
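For a Deployment, the field goes in the pod template spec; a minimal sketch with hypothetical names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-app-sa   # hypothetical; used instead of "default"
      containers:
      - name: app
        image: nginx
```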

What a service account, or any other user, can do is determined by the roles it is given (bound to); see RoleBindings or ClusterRoleBindings. The allowed verbs are specified per a role's apiGroups and resources under its rules definitions.

The default service account doesn't seem to be given any roles by default. It is possible to grant a role to the default service account as described in #2 here.

According to this, "...In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account".

HTH