vSphere CSI v2.2 – Online Volume Expansion
The vSphere CSI driver version 2.2 has just been released. One of the features I was looking forward to in this release is the inclusion of Online Volume Expansion. While volume expansion was available in earlier releases, it was always an offline operation. In other words, you had to detach the volume from the Pod, grow it, and then attach it back once the expand operation completed. In this version, there is no need to remove the Pod. In this short post, I'll show a quick demonstration of how it is done.
Requirements
Note: This feature requires vSphere 7.0 Update 2 (U2). This means that both the vCenter Server and the ESXi hosts must be running version 7.0U2. This feature won’t work on earlier versions of vSphere. Check out the requirements here.
vSphere CSI v2.2
This feature also requires vSphere CSI version 2.2. Note that there is an additional RBAC manifest for the CSI node service account in this release. Thus, there are 4 manifests in total that must be deployed for this release. These can be deployed directly from GitHub (link here), or you can download them and deploy them as follows:
```shell
$ kubectl apply -f rbac/vsphere-csi-controller-rbac.yaml -f rbac/vsphere-csi-node-rbac.yaml \
    -f deploy/vsphere-csi-controller-deployment.yaml -f deploy/vsphere-csi-node-ds.yaml
serviceaccount/vsphere-csi-controller created
clusterrole.rbac.authorization.k8s.io/vsphere-csi-controller-role created
clusterrolebinding.rbac.authorization.k8s.io/vsphere-csi-controller-binding created
serviceaccount/vsphere-csi-node created
role.rbac.authorization.k8s.io/vsphere-csi-node-role created
rolebinding.rbac.authorization.k8s.io/vsphere-csi-node-binding created
deployment.apps/vsphere-csi-controller created
configmap/internal-feature-states.csi.vsphere.vmware.com created
csidriver.storage.k8s.io/csi.vsphere.vmware.com created
service/vsphere-csi-controller created
daemonset.apps/vsphere-csi-node created
```
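Before creating any volumes, it may be worth confirming that the driver came up cleanly. A quick sketch, assuming the v2.2 manifests deploy the driver into the kube-system namespace as above:

```shell
# Confirm the controller Deployment and node DaemonSet are healthy,
# and that the CSIDriver object has been registered.
# (Namespace is an assumption based on the v2.2 deployment manifests.)
kubectl get deployment vsphere-csi-controller -n kube-system
kubectl get daemonset vsphere-csi-node -n kube-system
kubectl get csidriver csi.vsphere.vmware.com
```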
Simple Pod, PVC, StorageClass Manifests
I created some simple manifests to test the online volume expand operation. The first manifest is the Storage Class, which in my case points to a vSAN datastore. You can obviously change this if you need to use a different storage policy or datastore in your environment. The most important entry is allowVolumeExpansion, which is set to true.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vol-exp-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
parameters:
  storagePolicyName: "vsan-b"
```
The next manifest is the Persistent Volume Claim (PVC). It needs to use the previously created Storage Class. The initial size will be 1Gi.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vol-exp-pvc
spec:
  storageClassName: vol-exp-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
To prove that it is an online volume expansion, I will also create a simple busybox Pod, to which the volume will be attached and mounted.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-exp-busybox
spec:
  containers:
    - image: "k8s.gcr.io/busybox"
      command:
        - sleep
        - "3600"
      imagePullPolicy: Always
      name: busybox
      volumeMounts:
        - name: vol-exp
          mountPath: "/mnt/volume1"
  restartPolicy: Always
  volumes:
    - name: vol-exp
      persistentVolumeClaim:
        claimName: vol-exp-pvc
        readOnly: false
```
Deploy the sample manifests
Let’s begin by deploying these manifests, so that we can see the Storage Class, the PVC, the Persistent Volume (PV) and of course the Pod.
```shell
$ kubectl apply -f storageclass.yaml -f pvc.yaml -f pod.yaml
storageclass.storage.k8s.io/vol-exp-sc created
persistentvolumeclaim/vol-exp-pvc created
pod/vol-exp-busybox created

$ kubectl get sc,pvc,pv,pod
NAME                                     PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/vol-exp-sc   csi.vsphere.vmware.com   Delete          Immediate           true                   29s

NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/vol-exp-pvc   Bound    pvc-59eeb319-658a-4fb5-a09c-dc91bedfbc1a   1Gi        RWO            vol-exp-sc     29s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
persistentvolume/pvc-59eeb319-658a-4fb5-a09c-dc91bedfbc1a   1Gi        RWO            Delete           Bound    default/vol-exp-pvc   vol-exp-sc              27s

NAME                  READY   STATUS    RESTARTS   AGE
pod/vol-exp-busybox   1/1     Running   0          29s
```
Looks like everything is up and running, and we can see the 1GB persistent volume. Let’s now exec onto the Pod and examine the volume from there. We can also add some files to ensure they are not impacted by the volume grow operation.
```shell
$ kubectl exec -it pod/vol-exp-busybox -- sh
/ # mount | grep volume1
/dev/sdd on /mnt/volume1 type ext4 (rw,relatime)
/ # df -h | grep volume1
/dev/sdd                975.9M      2.5M    906.2M   0% /mnt/volume1
/ # cd /mnt/volume1/
/mnt/volume1 # ls
lost+found
/mnt/volume1 # mkdir demo-folder
/mnt/volume1 # cd demo-folder/
/mnt/volume1/demo-folder # cp /etc/* .
cp: omitting directory '/etc/init.d'
cp: omitting directory '/etc/iproute2'
cp: omitting directory '/etc/ld.so.conf.d'
cp: omitting directory '/etc/network'
/mnt/volume1/demo-folder # ls
fstab          hostname       inittab        issue          mtab           os-release     profile        random-seed    securetty      shadow
group          hosts          inputrc        ld.so.conf     nsswitch.conf  passwd         protocols      resolv.conf    services
```
Expand the volume
In this step, I am going to grow the volume from 1Gi to 2Gi online, using the kubectl patch command.
```shell
$ kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
vol-exp-pvc   Bound    pvc-59eeb319-658a-4fb5-a09c-dc91bedfbc1a   1Gi        RWO            vol-exp-sc     10m

$ kubectl patch pvc vol-exp-pvc -p '{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'
persistentvolumeclaim/vol-exp-pvc patched

$ kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
vol-exp-pvc   Bound    pvc-59eeb319-658a-4fb5-a09c-dc91bedfbc1a   2Gi        RWO            vol-exp-sc     12m
```
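If the capacity does not update immediately, a sketch of how you might watch the resize progress, assuming the standard Kubernetes PVC resize conditions are used:

```shell
# Inspect the PVC's events and conditions while the expansion runs.
# During a resize, Kubernetes surfaces conditions such as Resizing or
# FileSystemResizePending on the claim; for an online expansion there
# should be no need to restart the Pod.
kubectl describe pvc vol-exp-pvc

# Filter cluster events down to this claim.
kubectl get events --field-selector involvedObject.name=vol-exp-pvc
```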
It appears that the PVC capacity has successfully grown. Let’s now check the Pod to see if the new volume size is also reflected on the mount.
```shell
/mnt/volume1/demo-folder # mount | grep volume1
/dev/sdd on /mnt/volume1 type ext4 (rw,relatime)
/mnt/volume1/demo-folder # df -h | grep volume1
/dev/sdd                  1.9G      3.1M      1.8G   0% /mnt/volume1
/mnt/volume1/demo-folder # ls
fstab          hostname       inittab        issue          mtab           os-release     profile        random-seed    securetty      shadow
group          hosts          inputrc        ld.so.conf     nsswitch.conf  passwd         protocols      resolv.conf    services
/mnt/volume1/demo-folder # cat os-release
NAME=Buildroot
VERSION=2014.02
ID=buildroot
VERSION_ID=2014.02
PRETTY_NAME="Buildroot 2014.02"
```
It appears that the volume has been successfully grown online: the filesystem now reports its new size, and the files created earlier are still intact. To finish, I created this short video to show online volume expansion in action:
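If you want to tidy up after the demonstration, a minimal sketch, assuming the manifest filenames used earlier in this post:

```shell
# Remove the test Pod, claim, and Storage Class. Since the Storage Class
# reclaim policy is Delete, removing the PVC also deletes the backing
# volume on the vSAN datastore.
kubectl delete -f pod.yaml -f pvc.yaml -f storageclass.yaml
```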