I'm running a MySQL deployment on Kubernetes, but it seems the space I allocated was not enough: I initially created a persistent volume of 50GB and now I'd like to expand it to 100GB.
I already saw that a persistent volume claim is immutable after creation, but can I somehow just resize the persistent volume and then recreate my claim?
No, Kubernetes does not support automatic volume resizing yet.
Disk resizing is an entirely manual process at the moment.
Say you created a Kubernetes PV object with a given capacity, the PV is bound to a PVC, and the volume is attached/mounted to a node for use by a pod. If you increase the volume size, pods will continue to be able to use the disk without issue, but they will not have access to the additional space.
To make the additional space available on the volume, you must manually resize the partitions. You can do that by following the instructions here. You'd have to delete the pods referencing the volume first, wait for it to detach, then manually attach/mount the volume to some VM instance you have access to, and run through the required steps to resize it.
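Here is a rough sketch of that VM-side resize, assuming the disk shows up as /dev/sdc (a made-up device name) and holds a plain ext4 filesystem with no partition table; adjust for your cloud provider and disk layout:

    # On the VM the disk was attached to (the device name is an assumption):
    lsblk                        # find the newly attached disk, e.g. /dev/sdc
    sudo e2fsck -f /dev/sdc      # check the (unmounted) ext4 filesystem first
    sudo resize2fs /dev/sdc      # grow the filesystem to fill the enlarged disk
    # If the filesystem lives on a partition (e.g. /dev/sdc1), grow the
    # partition first, for example with growpart from cloud-utils:
    #   sudo growpart /dev/sdc 1 && sudo resize2fs /dev/sdc1

After that you detach the disk from the VM and let Kubernetes attach it back to the node when the pod is recreated.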
Opened issue #35941 to track the feature request.
There is some support for this in 1.8 and above, for some volume types, including gcePersistentDisk and awsElasticBlockStore, if certain experimental features are enabled on the cluster. For other volume types, it must be done manually for now. In addition, support for doing this automatically while pods are online (nice!) is coming in a future version (currently slated for 1.11).
For now, these are the steps I followed to do this manually with an AzureDisk volume type (for managed disks), which currently does not support persistent disk resize (but support is coming for this too):

1. Delete the deployment/pods that use the volumes so the disks get detached. Scale to do one pod at a time.
2. Resize the underlying Azure disks, attach/mount them to a VM, and run e2fsck and resize2fs to resize the filesystem on the PV (assuming an ext3/4 FS). Unmount the disks and detach them from the VM.
3. Delete the PVCs (make sure the PVs' reclaim policy is Retain first, so the disks are kept); the PVs will go to Released.
4. Edit the PV objects to make them Available again: update spec.capacity.storage, remove the spec.claimRef uid and resourceVersion fields, and remove status.phase (see the sketch after this list for one way to do part of this with kubectl patch).
5. Edit the PVC manifests: remove the metadata.resourceVersion field, the pv.kubernetes.io/bind-completed and pv.kubernetes.io/bound-by-controller annotations, set the spec.resources.requests.storage field to the updated PV size, and remove status.
6. Recreate the PVCs and scale the deployment back up. The PVCs will initially be in a Pending state, but both the PV and PVC should transition relatively quickly to Bound.
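As a rough illustration of steps 4 to 6 (not the exact commands I used), assuming a hypothetical PV named mysql-pv, a PVC manifest mysql-data-pvc.yaml, and a disk resized to 100Gi, something like this clears the stale binding:

    # Bump the PV's recorded capacity to match the resized disk
    kubectl patch pv mysql-pv --type merge \
      -p '{"spec":{"capacity":{"storage":"100Gi"}}}'

    # Drop the stale binding details so the PV can become Available again;
    # removing only uid/resourceVersion keeps claimRef pointing at the same PVC name
    kubectl patch pv mysql-pv --type json -p '[
      {"op":"remove","path":"/spec/claimRef/uid"},
      {"op":"remove","path":"/spec/claimRef/resourceVersion"}
    ]'

    # Recreate the PVC from an edited manifest requesting the new size
    kubectl apply -f mysql-data-pvc.yaml
    kubectl get pv,pvc    # both should end up Bound after a short while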
In terms of PVC/PV 'resizing', that's still not supported in k8s, though I believe it could potentially arrive in 1.9.
It's possible to achieve the same end result by dealing with the PVC/PV and (e.g.) the GCE PD directly, though.
For example, I had a gitlab deployment, with a PVC and a dynamically provisioned PV via a StorageClass resource. Here are the steps I ran through:
1. Took a snapshot of the underlying GCE PD, in case anything went wrong.
2. Made sure the PV's ReclaimPolicy was Retain, so the disk would survive deleting the claim.
3. Ran kubectl describe pv <name-of-pv> and kept the output (useful when creating the PV manifest later).
4. Deleted the deployment/pod using the volume, then the PVC and the PV.
5. Resized the GCE PD to the new capacity.
6. Created a new PV manifest with "gcePersistentDisk: pdName: <name-of-pd>" defined, along with other details that I'd grabbed at step 3. Make sure you update spec.capacity.storage to the new capacity you want the PV to have (although not essential, and it has no effect here, you may want to update the storage capacity/value in your PVC manifest, for posterity).
7. kubectl apply (or equivalent) to recreate your deployment/pod, PVC and PV (a rough sketch of steps 5 to 7 with the gcloud CLI follows the note below).

note: some steps may not be essential, such as deleting some of the existing deployment/pod resources, though I personally prefer to remove them, seeing as I know the ReclaimPolicy is Retain, and I have a snapshot.
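Here is a rough sketch of steps 5 to 7 with the gcloud CLI, assuming a hypothetical disk named gitlab-disk in zone europe-west1-b, a new size of 100GB, and manifests saved locally; your names, zone and files will differ:

    # Resize the underlying GCE persistent disk (the PVC/PV were already deleted)
    gcloud compute disks resize gitlab-disk --size 100GB --zone europe-west1-b

    # Recreate the PV from a manifest that references the same disk via
    # gcePersistentDisk.pdName and has spec.capacity.storage set to the new size,
    # then recreate the PVC and the deployment
    kubectl apply -f gitlab-pv.yaml
    kubectl apply -f gitlab-pvc.yaml
    kubectl apply -f gitlab-deployment.yaml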
It is possible in Kubernetes 1.9 (alpha in 1.8) for some volume types: gcePersistentDisk, awsElasticBlockStore, Cinder, glusterfs, rbd
It requires enabling the PersistentVolumeClaimResize admission plug-in and storage classes whose allowVolumeExpansion field is set to true. See the official docs at https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims
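For illustration, a StorageClass that permits expansion could look like this minimal sketch for GCE PD (the class name and parameters are just examples):

    cat <<EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: expandable-standard       # hypothetical class name
    provisioner: kubernetes.io/gce-pd
    parameters:
      type: pd-standard
    allowVolumeExpansion: true        # allows PVCs of this class to request a larger size
    EOF

Only claims created from a class with allowVolumeExpansion: true can then be grown, and the API server must have the PersistentVolumeClaimResize admission plug-in enabled.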
Yes, as of 1.11, persistent volumes can be resized on certain cloud providers. To increase volume size:
1. Edit the PVC size using kubectl edit pvc $your_pvc and set spec.resources.requests.storage to the new value (see the sketch below).
2. Terminate the pod that is using the volume.

Once the pod using the volume is terminated, the filesystem is expanded and the size of the PV is increased. See the above link for details.
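As a minimal sketch of that edit, assuming a hypothetical PVC named mysql-data grown from 50Gi to 100Gi (kubectl patch makes the same change as kubectl edit):

    # Request the new size on the claim; the storage class must have
    # allowVolumeExpansion: true for this to be accepted
    kubectl patch pvc mysql-data --type merge \
      -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'

    # Restart the pod that mounts the claim so the filesystem gets expanded
    kubectl delete pod mysql-0        # hypothetical pod name; its controller recreates it
    kubectl get pvc mysql-data        # capacity shows 100Gi once the resize completes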
Yes, it can be, after version 1.8. Have a look at volume expansion here.