Kubernetes Persistent Volume Claim mounted with wrong uid and gid

Posted 2020-07-18 10:35

I'm creating a Kubernetes PVC and a Deploy that uses it.

In the yaml it is specified that uid and gid must be 1000.

But when deployed, the volume is mounted with different IDs, so I have no write access to it.

How can I effectively specify uid and gid for a PVC?

PVC yaml:

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jmdlcbdata
  annotations:
    pv.beta.kubernetes.io/gid: "1000"
    volume.beta.kubernetes.io/mount-options: "uid=1000,gid=1000"
    volume.beta.kubernetes.io/storage-class: default
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "2Gi"
  storageClassName: "default"

Deploy yaml:

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: jmdlcbempty
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: jmdlcbempty
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      volumes:
        - name: jmdlcbdata
          persistentVolumeClaim:
            claimName: jmdlcbdata  
      containers:
        - name: myalpine
          image: "alpine"
          command:
            - /bin/sh
            - "-c"
            - "sleep 60m"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /usr/share/logstash/data
              name: jmdlcbdata

And here is the dir list:

$ kubectl get pvc; kubectl get pods;            
NAME         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
jmdlcbdata   Bound     pvc-6dfcdb29-8a0a-11e8-938b-1a5d4ff12be9   20Gi       RWO            default        2m
NAME                           READY     STATUS    RESTARTS   AGE
jmdlcbempty-68cd675757-q4mll   1/1       Running   0          6s
$ kubectl exec -it jmdlcbempty-68cd675757-q4mll -- ls -ltr /usr/share/logstash/
total 4
drwxr-xr-x    2 nobody   42949672      4096 Jul 17 21:44 data

I'm working on an IBM Bluemix cluster.

Thanks.

2 Answers

淡お忘 · 2020-07-18 11:12

After some experimenting, I can finally provide an answer.

There are several ways to run processes in a Container with a specific UID and GID:

  1. The runAsUser field in securityContext in a Pod definition specifies the user ID for the first process run in each Container in the Pod.

  2. The fsGroup field in securityContext in a Pod specifies a group ID associated with all Containers in the Pod. This group ID is also applied to volumes mounted into the Pod and to any files created on those volumes.

  3. When a Pod consumes a PersistentVolume that has a pv.beta.kubernetes.io/gid annotation, the annotated GID is applied to all Containers in the Pod in the same way as GIDs specified in the Pod's security context.

Note that every GID, whether it originates from a PersistentVolume annotation or the Pod's specification, is applied to the first process run in each Container.
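As a minimal sketch of option 3 (the PV name and the NFS server/path are hypothetical placeholders), a manually created PersistentVolume carrying the GID annotation might look like:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-with-gid                      # hypothetical name
  annotations:
    pv.beta.kubernetes.io/gid: "1000"    # GID applied to Pods consuming this PV
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  nfs:                                   # example backend; server/path are placeholders
    server: nfs.example.com
    path: /exports/data
```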

Also, there are several ways to set up mount options for PersistentVolumes. A PersistentVolume is a piece of storage in the cluster that has been provisioned by an administrator; it can also be provisioned dynamically via a StorageClass. Therefore, you can specify mount options on a PersistentVolume when you create it manually, or you can specify them on a StorageClass, and every PersistentVolume requested from that class by a PersistentVolumeClaim will have those options.

It is better to use the mountOptions attribute rather than the volume.beta.kubernetes.io/mount-options annotation, and the storageClassName attribute instead of the volume.beta.kubernetes.io/storage-class annotation. These annotations predate the attributes and still work, but they will be fully deprecated in a future Kubernetes release. Here is an example:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: with-permissions
provisioner: <your-provider>
parameters:
  <option-for your-provider>
reclaimPolicy: Retain
mountOptions: # mount options applied to every PV of this class
  - uid=1000
  - gid=1000
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "2Gi"
  storageClassName: "with-permissions" # request a PV from the class above

Note that mount options are not validated, so the mount will simply fail if one is invalid. Also, you can use the uid=1000,gid=1000 mount options for file systems like FAT or NTFS, but not for ext4, for example.

Referring to your configuration:

  1. In your PVC yaml, volume.beta.kubernetes.io/mount-options: "uid=1000,gid=1000" does nothing, because it is an option for a StorageClass or a PV, not for a PVC.

  2. You specified both storageClassName: "default" and volume.beta.kubernetes.io/storage-class: default in your PVC yaml, but they do the same thing. Also, the default StorageClass has no mount options by default.

  3. In your PVC yaml, the pv.beta.kubernetes.io/gid: "1000" annotation does the same thing as the securityContext.fsGroup: 1000 option in the Deployment definition, so the former is unnecessary.

Try creating a StorageClass with the required mount options (uid=1000, gid=1000) and using a PVC to request a PV from it, as in the example above. After that, use a Deployment definition with a securityContext to set up access to the mounted PVC. But make sure the mount options you use are supported by your file system.

看我几分像从前 · 2020-07-18 11:20

You can use an initContainer to set the UID/GID permissions for the volume mount path.

The UID/GID you see by default is due to root squash being enabled on the NFS share.

Steps: https://console.bluemix.net/docs/containers/cs_troubleshoot_storage.html#nonroot
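A minimal sketch of the initContainer approach, reusing the volume and mount path from the question's Deployment (the initContainer name is hypothetical, and 1000:1000 stands for the desired uid:gid; if root squash is still enabled on the NFS share, the chown as root may be refused until it is disabled per the linked steps):

```yaml
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      initContainers:
        - name: fix-permissions            # hypothetical name
          image: alpine
          # chown the mount path to the desired uid:gid before the main container starts
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/logstash/data"]
          securityContext:
            runAsUser: 0                   # must run as root to chown
          volumeMounts:
            - name: jmdlcbdata
              mountPath: /usr/share/logstash/data
```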
