I am trying to access the Kubernetes API directly, without running kubectl proxy.
But when I use the token of the serviceaccount default, I get a 403.
Even after creating a ClusterRole and ClusterRoleBinding for this serviceaccount, the request is rejected with 403.
The configuration I applied looks like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
(It is nearly the example from the RBAC docs on kubernetes.io; I only used the ServiceAccount as the subject and changed the resource to pods.)
Then I applied the config and tried to access the pods via curl:
$ kubectl apply -f secrets.yaml
clusterrole "pod-reader" created
clusterrolebinding "pod-reader" created
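To rule out the RBAC side, one thing worth trying (a suggestion, independent of curl and tokens) is kubectl's built-in authorization check with impersonation, which exists on 1.8:

```shell
# Ask the API server whether the default service account may list pods.
# Prints "yes" or "no"; must be run with a kubectl context that is allowed
# to impersonate (e.g. the cluster-admin kubeconfig from kubeadm).
kubectl auth can-i list pods \
  --as=system:serviceaccount:default:default \
  --namespace=default
```

If this prints yes, the ClusterRole and ClusterRoleBinding are working and the 403 must be caused by something else in the request itself.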
$ curl https://192.168.1.31:6443/v1/api/namespaces/default/pods --header "Authorization: Bearer $TOKEN" --insecure
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:serviceaccount:default:default\" cannot get path \"/v1/api/namespaces/default/pods\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
I guess the error message shows that the authentication part is OK, because the request was correctly identified as coming from the service account default:default. But what do I have to do to entitle this (or another) service account to access information about pods or nodes?
I see this error when calling curl from outside a Pod, but also when I use, for example, the Kubernetes Java client to access the API from within a Pod using the secret mounted under /var/run/secrets.
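For completeness, this is roughly how I obtain $TOKEN for the curl call from outside the cluster (a sketch for 1.8, where a token secret is auto-created for each service account; the secret's name suffix differs per cluster):

```shell
# Find the name of the token secret attached to the default service account,
# then extract the token field and base64-decode it.
SECRET=$(kubectl -n default get serviceaccount default \
  -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n default get secret "$SECRET" \
  -o jsonpath='{.data.token}' | base64 --decode)
```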
I am a K8s newbie, so please forgive me if this is a stupid question.
Regarding the configuration: I have K8s 1.8 running on a cluster of Raspberry Pis with one Master and two Worker Nodes. I didn't pass much to kubeadm init, so I guess it should have the default configuration. FWIW kubectl describe shows this command for the apiserver:
kube-apiserver
--requestheader-group-headers=X-Remote-Group
--service-account-key-file=/etc/kubernetes/pki/sa.pub
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
--secure-port=6443
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--advertise-address=192.168.1.31
--service-cluster-ip-range=10.96.0.0/12
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--enable-bootstrap-token-auth=true
--requestheader-username-headers=X-Remote-User
--allow-privileged=true
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-allowed-names=front-proxy-client
--client-ca-file=/etc/kubernetes/pki/ca.crt
--insecure-port=0
--authorization-mode=Node,RBAC
--etcd-servers=http://127.0.0.1:2379
I think you have a little issue in your curl path: it should be
/api/v1/namespaces/...
and not /v1/api/namespaces/...
. See e.g. https://kubernetes.io/docs/api-reference/v1.8/#list-62
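With that fix, the request from the question should work (same address and token as above, still skipping TLS verification with --insecure):

```shell
# For core ("legacy") resources like pods, the path is /api/<version>/...;
# only named API groups use the /apis/<group>/<version>/... form.
curl https://192.168.1.31:6443/api/v1/namespaces/default/pods \
  --header "Authorization: Bearer $TOKEN" \
  --insecure
```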