I created the following PersistentVolume and PersistentVolumeClaim by calling
kubectl create -f nameOfTheFileContainingTheFollowingContent.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-monitoring-static-content
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/some/path"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-monitoring-static-content-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 100Mi
After this I tried to delete the PVC, but the command hung.
When calling kubectl describe pvc pv-monitoring-static-content-claim
I get the following result:
Name: pv-monitoring-static-content-claim
Namespace: default
StorageClass:
Status: Terminating (lasts 5m)
Volume: pv-monitoring-static-content
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Finalizers: [foregroundDeletion]
Capacity: 100Mi
Access Modes: RWO
Events: <none>
And for kubectl describe pv pv-monitoring-static-content
Name: pv-monitoring-static-content
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller=yes
Finalizers: [kubernetes.io/pv-protection foregroundDeletion]
StorageClass:
Status: Terminating (lasts 16m)
Claim: default/pv-monitoring-static-content-claim
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 100Mi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /some/path
HostPathType:
Events: <none>
There is no pod running that uses the persistent volume. Could anybody give me a hint why the pvc and the pv are not deleted?
This happens when the persistent volume is protected. You should be able to cross-verify this:
Command:
kubectl describe pvc PVC_NAME | grep Finalizers
Output:
Finalizers: [kubernetes.io/pvc-protection]
You can fix this by clearing the finalizers using kubectl patch:
kubectl patch pvc PVC_NAME -p '{"metadata":{"finalizers": []}}' --type=merge
Ref: Storage Object in Use Protection
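Applied to the objects from the question, the same idea clears both stuck resources. A sketch, assuming nothing still mounts the volume; setting finalizers to null removes the field via a JSON merge patch:

```shell
# Clear the finalizers on the stuck PVC and PV from the question.
# Only do this after confirming no pod still uses the volume.
kubectl patch pvc pv-monitoring-static-content-claim \
  -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl patch pv pv-monitoring-static-content \
  -p '{"metadata":{"finalizers":null}}' --type=merge
```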
I'm not sure why this happened, but after deleting the finalizers of the PV and the PVC via the Kubernetes dashboard, both were deleted.
This happened again after repeating the steps I described in my question.
Seems like a bug.
The PV is protected. Delete the PV before deleting the PVC. Also, delete any pods/deployments which are claiming any of the referenced PVCs. For further information, check out Storage Object in Use Protection.
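The order described above can be sketched as follows (the deployment name is a hypothetical placeholder; the PV/PVC names are taken from the question):

```shell
# Remove any workloads still claiming the PVC (hypothetical name):
kubectl delete deployment monitoring-deployment

# Then delete the PV before the PVC, as suggested above:
kubectl delete pv pv-monitoring-static-content
kubectl delete pvc pv-monitoring-static-content-claim
```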
You can get rid of it by editing your PVC to remove the PVC protection finalizer:
- kubectl edit pvc YOUR_PVC -n NAME_SPACE
- Manually put a # before the kubernetes.io/pvc-protection finalizer line
- The PV and PVC will then be deleted
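Based on the Finalizers shown in the question's describe output, the part to comment out in the editor would look roughly like this (a sketch):

```yaml
metadata:
  finalizers:
  # - kubernetes.io/pvc-protection   # put a '#' in front of this entry
```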
If the PV still exists, it may be because its reclaim policy is set to Retain, in which case it won't be deleted even if the PVC is gone. From the docs:
PersistentVolumes can have various reclaim policies, including
“Retain”, “Recycle”, and “Delete”. For dynamically provisioned
PersistentVolumes, the default reclaim policy is “Delete”. This means
that a dynamically provisioned volume is automatically deleted when a
user deletes the corresponding PersistentVolumeClaim. This automatic
behavior might be inappropriate if the volume contains precious data.
In that case, it is more appropriate to use the “Retain” policy. With
the “Retain” policy, if a user deletes a PersistentVolumeClaim, the
corresponding PersistentVolume is not deleted. Instead, it is moved
to the Released phase, where all of its data can be manually recovered.
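If you want the PV cleaned up together with its claim in the future, the reclaim policy can be switched with a patch (a sketch; the PV name is taken from the question):

```shell
# Change the reclaim policy from Retain to Delete so the PV is removed
# automatically once its PVC is deleted:
kubectl patch pv pv-monitoring-static-content \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```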
I just met this issue hours ago.
I deleted the deployments that used these references, and the PV/PVCs were automatically terminated.
In my case, as soon as I deleted the pod associated with both the pv and the pvc, the pv and pvc in Terminating status were gone.
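To find which pods still reference a PVC before deleting it, something like the following can help (a sketch; assumes jq is installed, and the claim name is taken from the question):

```shell
# List namespace/name of every pod that mounts the PVC from the question
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
      | select(.spec.volumes[]?.persistentVolumeClaim.claimName
               == "pv-monitoring-static-content-claim")
      | "\(.metadata.namespace)/\(.metadata.name)"'
```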
In my case a PVC was not deleted because of a missing namespace (I deleted the namespace before deleting all resources/PVCs).
Solution: create a namespace with the same name as before; then I was able to remove the finalizers
and finally the PVC.
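The recovery described above can be sketched like this (the namespace name "monitoring" is a hypothetical placeholder; the claim name is taken from the question):

```shell
# Recreate the deleted namespace so the stuck PVC can be finalized
# ("monitoring" is a hypothetical namespace name):
kubectl create namespace monitoring

# Now the finalizers can be removed and the PVC deleted
kubectl patch pvc pv-monitoring-static-content-claim -n monitoring \
  -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl delete namespace monitoring
```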