Can a PVC be bound to a specific PV?

Posted 2020-01-28 05:40

This was discussed by k8s maintainers in https://github.com/kubernetes/kubernetes/issues/7438#issuecomment-97148195:

Allowing users to ask for a specific PV breaks the separation between them

I don't buy that. We allow users to choose a node. It's not the common case, but it exists for a reason.

How did it end? What's the intended way to have >1 PV's and PVC's like the one in https://github.com/kubernetes/kubernetes/tree/master/examples/nfs?

We use NFS, and PersistentVolume is a handy abstraction because we can keep the server IP and the path there. But a PersistentVolumeClaim binds to any PV of sufficient size, so we cannot predictably reuse a specific path.

We can set volumeName in a PVC spec block (see https://github.com/kubernetes/kubernetes/pull/7529), but it makes no difference.

Tags: kubernetes
6 answers
兄弟一词,经得起流年.
#2 · 2020-01-28 05:42

Yes, you can provide volumeName in the PVC. The claim will bind to exactly the PV with that name, provided the rest of the spec (access modes, capacity) is compatible.
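A minimal sketch, assuming a PV named pv0003 already exists with compatible access modes and at least 5Gi of capacity (both names here are placeholders):

# Sketch of a PVC that asks for one specific PV by name.
# "pv0003" must match the metadata.name of an existing PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: pv0003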

聊天终结者
#3 · 2020-01-28 05:45

There is a way to pre-bind PVs to PVCs today; here is an example showing how:

1) Create a PV object with a claimRef field referencing a PVC that you will subsequently create:

$ kubectl create -f pv.yaml
persistentvolume "pv0003" created

where pv.yaml contains:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  claimRef:
    namespace: default
    name: myclaim
  nfs:
    path: /tmp
    server: 172.17.0.2

2) Then create a PVC with the name referenced in claimRef (myclaim, in the default namespace):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

3) The PV and PVC should be bound immediately:

$ kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
myclaim   Bound     pv0003    5Gi        RWO           4s
$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   STATUS    CLAIM             REASON    AGE
pv0003    5Gi        RWO           Bound     default/myclaim             57s

We are also planning on introducing "volume selectors", which will enable users to select specific storage based on implementation-specific characteristics (a specific rack, for example, or, in your case, a way to enforce a 1:1 PV-to-PVC mapping).

See https://github.com/kubernetes/kubernetes/issues/18333.
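A rough sketch of what such a selector could look like on a claim (the rack label key and the rack-1 value are made-up examples, not anything from the issue above):

# Hypothetical claim that only matches PVs labeled rack=rack-1.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rack-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      rack: rack-1

The matching PV would carry the same rack: rack-1 entry under metadata.labels.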

爷的心禁止访问
#4 · 2020-01-28 05:52

Now we can use storageClassName (at least since Kubernetes 1.7.x).

See details at https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage

The sample code is copied here as well:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
\"骚年 ilove
5楼-- · 2020-01-28 05:58

It is better to specify both volumeName in the PVC and claimRef in the PV.

Using storageClassName: manual in both the PV and the PVC lets them bind to each other, but it does not guarantee a specific pairing when there are many "manual" PVs and PVCs.

Specifying a volumeName in your PVC does not prevent a different PVC from binding to the specified PV before yours does. Your claim will remain Pending until the PV is Available.

Specifying a claimRef in a PV does not prevent the specified PVC from being bound to a different PV. The PVC is free to choose another PV to bind to according to the normal binding process. Therefore, to avoid these scenarios and ensure your claim gets bound to the volume you want, you must ensure that both volumeName and claimRef are specified.
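A minimal sketch of such a pre-bound pair, reusing the placeholder names and NFS details from the earlier answer:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  claimRef:            # reserve this PV for one specific claim
    namespace: default
    name: myclaim
  nfs:
    path: /tmp
    server: 172.17.0.2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: pv0003   # ask for that PV and no other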

You can tell that your setting of volumeName and/or claimRef influenced the matching and binding process by inspecting a Bound PV and PVC pair for the pv.kubernetes.io/bound-by-controller annotation. The PVs and PVCs where you set the volumeName and/or claimRef yourself will have no such annotation, but ordinary PVs and PVCs will have it set to "yes".
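For example, assuming the pv0003/myclaim pair sketched above, one quick way to check for the annotation is:

$ kubectl get pv pv0003 -o yaml | grep bound-by-controller
$ kubectl get pvc -n default myclaim -o yaml | grep bound-by-controller

No output means the annotation is absent, i.e. the binding came from your explicit volumeName/claimRef settings rather than from the controller's normal matching.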

When a PV has its claimRef set to some PVC name and namespace, and is reclaimed according to a Retain reclaim policy, its claimRef will remain set to the same PVC name and namespace even if the PVC or the whole namespace no longer exists.

source: https://docs.openshift.com/container-platform/3.11/dev_guide/persistent_volumes.html

女痞
#6 · 2020-01-28 06:04

I don't think @jayme's edit to the original answer is forward compatible.

Though only documented as a proposal, label selectors in PVCs seem to work with Kubernetes 1.3.0.

I've written an example that defines two volumes that are identical except for their labels. Either would satisfy any of the claims, but when a claim specifies

selector:
  matchLabels:
    id: test2

it is evident that one of the dependent pods won't start, and the test1 PV stays unbound.
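The manifest itself is not reproduced in this post, but a minimal sketch of the idea (PV names, labels, hostPath locations are placeholders; the dependent pods are omitted) could look like this:

# Two PVs identical except for their labels.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
  labels:
    id: test1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/test1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
  labels:
    id: test2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/test2
---
# A claim that only matches the PV labeled id=test2.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      id: test2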

This can be tested, for example, in minikube with:

$ kubectl create -f volumetest.yml
$ sleep 5
$ kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
volumetest1                       1/1       Running   0          8m
volumetest1-conflict              0/1       Pending   0          8m
$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   STATUS      CLAIM          REASON    AGE
pv1       1Gi        RWO           Available                            8m
pv2       1Gi        RWO           Bound       default/test             8m
地球回转人心会变
#7 · 2020-01-28 06:07

It can be done using the volumeName field, for example:

apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "claimapp80"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "10Gi"
  volumeName: "app080"

This will claim the specific PV named app080.
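For the claim to bind, a PV whose metadata.name is exactly app080 has to exist; a minimal sketch (the hostPath backend and size are placeholders):

apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "app080"        # must match the volumeName in the claim
spec:
  capacity:
    storage: "10Gi"
  accessModes:
    - "ReadWriteOnce"
  hostPath:
    path: "/tmp/app080"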
