This was discussed by k8s maintainers in https://github.com/kubernetes/kubernetes/issues/7438#issuecomment-97148195:
> Allowing users to ask for a specific PV breaks the separation between them

> I don't buy that. We allow users to choose a node. It's not the common case, but it exists for a reason.
How did it end? What's the intended way to have more than one PV and PVC, like the ones in https://github.com/kubernetes/kubernetes/tree/master/examples/nfs?
We use NFS, and PersistentVolume is a handy abstraction because we can keep the server IP and the path there. But a PersistentVolumeClaim gets any PV with sufficient size, preventing path reuse.
I can set `volumeName` in the PVC spec block (see https://github.com/kubernetes/kubernetes/pull/7529), but it makes no difference.
Yes, you can actually provide the `volumeName` in the PVC. It will bind exactly to the PV named in `volumeName` (the rest of the spec still has to be in sync with the PV).

There is a way to pre-bind PVs to PVCs today; here is an example showing how:
1) Create a PV object with a ClaimRef field referencing a PVC that you will subsequently create:
where `pv.yaml` contains:
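A minimal sketch of such a PV, assuming an NFS backend; the names `pv0003` and `myclaim`, the server address, and the capacity are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  # claimRef pre-binds this PV to the PVC created in step 2
  claimRef:
    namespace: default
    name: myclaim
  nfs:
    server: 172.17.0.2
    path: /tmp
```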
2) Then create the PVC with the same name:
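A matching claim might look like this (again a sketch; the requested size must fit within the PV's capacity):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```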
3) The PV and PVC should be bound immediately:
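One way to check, assuming the placeholder names above:

```sh
kubectl get pv pv0003    # STATUS should be Bound, CLAIM default/myclaim
kubectl get pvc myclaim  # STATUS should be Bound, VOLUME pv0003
```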
We are also planning on introducing "Volume Selectors", which will enable users to select specific storage based on some implementation specific characteristics (specific rack, for example, or in your case, a way to enforce 1:1 PV to PVC mapping).
See https://github.com/kubernetes/kubernetes/issues/18333.
Now we can use `storageClassName` (at least from Kubernetes 1.7.x). See details at https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage
The sample code is copied here as well:
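A PV/PVC pair along the lines of that docs page; the names, sizes, and hostPath follow the docs' example and are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual   # matched against the PVC below
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual   # must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```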
Better to specify both `volumeName` in the PVC and `claimRef` in the PV.

By using `storageClassName: manual` in both the PV and the PVC we can bind them to each other, but it does not guarantee the pairing if there are many `manual` PVs and PVCs.

Source: https://docs.openshift.com/container-platform/3.11/dev_guide/persistent_volumes.html
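A sketch of the 1:1 pairing; all names, the namespace, and the NFS details are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-app
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  # claimRef pins this PV to exactly one PVC
  claimRef:
    namespace: default
    name: pvc-app
  nfs:
    server: 10.0.0.10
    path: /exports/app
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-app
spec:
  storageClassName: manual
  # volumeName pins this PVC to exactly one PV
  volumeName: pv-app
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```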
I don't think @jayme's edit to the original answer is forward compatible.
Though only documented as a proposal, label selectors in PVCs seem to work with Kubernetes 1.3.0.
I've written an example that defines two volumes that are identical except in `labels`. Both would satisfy any of the claims, but when the claims specify label selectors, it is evident that one of the dependent pods won't start, and the `test1` PV stays unbound.
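One plausible shape of that test, assuming both claims select the same `id: test0` label (which would reproduce the described outcome: one claim stays Pending, so its pod won't start, and `test1` stays unbound):

```yaml
# Two PVs that are identical except in labels
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test0
  labels:
    id: test0
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/test0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test1
  labels:
    id: test1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/test1
---
# Two claims that both select the id=test0 label; only one can bind
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      id: test0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      id: test0
```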
This can be tested, for example, in minikube with:
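For instance (the file name is hypothetical):

```sh
minikube start
kubectl apply -f volumes.yaml   # the PVs and PVCs sketched above
kubectl get pv                  # test1 should stay Available (unbound)
kubectl get pvc                 # the losing claim should stay Pending
```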
It can be done using the keyword `volumeName`. For example, the following will claim the specific PV `app080`:
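A sketch of such a claim; only the PV name `app080` comes from the answer, the rest is illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app080-claim
spec:
  volumeName: app080   # bind only to the PV named app080
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```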