VCF 9.0 Volume Service – Consuming static volumes via VKS
I have been spending some time looking at the new Volume Service in VMware Cloud Foundation (VCF) 9.0. Through VCF Automation, it is possible for tenants of VCF to provision their own volumes. These volumes can be consumed by the VM Service, something that has been a part of the Supervisor Services for many years. However, it is also possible for workloads running in VKS, the vSphere Kubernetes Service, to consume static volumes provisioned via the Volume Service. In this post, I will show you the steps to create a static volume via the Volume Service, and then create the appropriate manifests in your VKS cluster to make the volume available to Pods running on your cluster.
You might ask why you would want a static volume to be provisioned in the first place. This is eloquently answered in our vSphere CSI driver documentation, which I will reproduce here:
- Use an existing storage device: You have provisioned persistent storage, a First Class Disk (FCD), directly in your vCenter, and want to use this FCD in your cluster.
- Make retained data available to the cluster: You have provisioned a volume with a reclaimPolicy: Retain parameter in the storage class by using dynamic provisioning (see the StorageClass sketch after this list). You have removed the PVC, but the PV, the physical storage in vCenter, and the data still exist. You want to access the retained data from an application in your cluster.
- Share persistent storage across namespaces in the same cluster: You have provisioned a PV in a namespace of your cluster. You want to use the same storage instance for an application pod that is deployed to a different namespace in your cluster.
- Share persistent storage across clusters in the same zone: You have provisioned a PV for your cluster. To share the same persistent storage instance with other clusters in the same zone, you must manually create the PV and matching PVC in the other cluster.
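As an aside, the reclaimPolicy mentioned in the second use case above is a field on the StorageClass used for dynamic provisioning. Here is a minimal sketch of such a StorageClass for the vSphere CSI driver; the class name and storage policy are purely illustrative and would need to match your own environment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-retain-sc               # illustrative name
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Retain                    # keep the PV and backing FCD when the PVC is deleted
parameters:
  storagepolicyname: "vSAN Default Storage Policy"   # illustrative vSphere storage policy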
Step 1: Create Volume in Volume Service
The very first step in the process is to use VCF Automation to create a volume. Here is an example of such a volume. The name of the volume is important, as that is what is used to get the volume into VKS. Also note that this volume has an access mode of Read-Write-Once (ReadWriteOnce), meaning that it will be created as a “block” volume.
Step 2: (Optional) Check the volume on the Supervisor
As highlighted, this is an optional step. But if you do have access to the Supervisor, you can check that the volume is present on the Supervisor after it has been created via the Volume Service. Note that the PVC (Persistent Volume Claim) will be associated with the Namespace where the volume from the Volume Service was created. Use the ‘-A’ option to the kubectl command to display all PVCs.
# kubectl get pvc -A
NAMESPACE         NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     VOLUMEATTRIBUTESCLASS   AGE
cjh-ns-01-jn88b   volume-claim-cjh   Bound    pvc-0d16185f-626b-4788-a55e-4d3e48245226   7Gi        RWO            vsan-no-spaces   <unset>                 5s
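If you want a little more detail on this Supervisor PVC, for example to double-check its name and capacity before building the static PV manifest in the next step, standard kubectl commands such as the following can be used (the namespace and PVC name are the ones from this example):

# kubectl get pvc volume-claim-cjh -n cjh-ns-01-jn88b
# kubectl describe pvc volume-claim-cjh -n cjh-ns-01-jn88b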
Step 3: Build YAML manifests for PVC, PV and Pod
Next, create the following manifests in YAML, and apply them to the VKS cluster. The PV entry spec.csi.volumeHandle must be set to the PVC name retrieved from either the Supervisor or the VCFA Volume Service, i.e. “volume-claim-cjh” in this example. Set spec.accessModes to ReadWriteOnce and spec.capacity.storage to match the size of the volume.
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-vks-block-pv
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
spec:
  storageClassName: vsan-no-spaces
  capacity:
    storage: 7Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  claimRef:
    namespace: default
    name: static-vks-block-pvc
  csi:
    driver: "csi.vsphere.vmware.com"
    volumeAttributes:
      type: "vSphere CNS Block Volume"
    volumeHandle: "volume-claim-cjh"
PVC
For the PVC manifest, again ensure that spec.accessModes is set to ReadWriteOnce and that spec.resources.requests.storage matches the size of the volume. The spec.volumeName should be set to the name of the PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-vks-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 7Gi
  storageClassName: vsan-no-spaces
  volumeName: static-vks-block-pv
Pod
Finally, create a Pod so that we can confirm that the volume can be accessed successfully. Include a volume requirement which matches the PVC in spec.volumes.persistentVolumeClaim.claimName. This Pod has a very simple busybox container, but a number of spec.containers.securityContext settings are required for the Pod to successfully start on a VKS cluster.
apiVersion: v1
kind: Pod
metadata:
  name: block-pod
spec:
  containers:
    - name: busybox
      image: "registry.k8s.io/busybox:latest"
      volumeMounts:
        - name: block-vol
          mountPath: "/demo"
      command: [ "sleep", "1000000" ]
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        runAsNonRoot: true
        runAsUser: 1000
        seccompProfile:
          type: "RuntimeDefault"
  volumes:
    - name: block-vol
      persistentVolumeClaim:
        claimName: static-vks-block-pvc
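Assuming the three manifests above have been saved to files (the filenames below are just examples), they can be applied to the VKS cluster with kubectl:

$ kubectl apply -f static-vks-block-pv.yaml
$ kubectl apply -f static-vks-block-pvc.yaml
$ kubectl apply -f block-pod.yaml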
Step 4: Verify Pod has access to volume
After applying these YAML manifests to the VKS cluster API server using kubectl, and assuming all of the objects are created successfully, we can now verify that the volume is available in the Pod.
$ kubectl get pvc,pv,pod
NAME                                         STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS     VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/static-vks-block-pvc   Bound    static-vks-block-pv   7Gi        RWO            vsan-no-spaces   <unset>                 20m

NAME                                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS     VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/static-vks-block-pv   7Gi        RWO            Delete           Bound    default/static-vks-block-pvc   vsan-no-spaces   <unset>                          20m

NAME            READY   STATUS    RESTARTS   AGE
pod/block-pod   1/1     Running   0          11m

$ kubectl exec -it block-pod -- "/bin/sh"
/ $ df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  19.5G      6.6G     11.9G  36% /
tmpfs                    64.0M         0     64.0M   0% /dev
/dev/sdb                  6.8G     24.0K      6.4G   0% /demo
/dev/sda3                19.5G      6.6G     11.9G  36% /etc/hosts
/dev/sda3                19.5G      6.6G     11.9G  36% /dev/termination-log
/dev/sda3                19.5G      6.6G     11.9G  36% /etc/hostname
/dev/sda3                19.5G      6.6G     11.9G  36% /tmp/resolv.conf
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                    13.0G     12.0K     13.0G   0% /tmp/secrets/kubernetes.io/serviceaccount
tmpfs                     7.8G         0      7.8G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/latency_stats
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                     7.8G         0      7.8G   0% /proc/scsi
tmpfs                     7.8G         0      7.8G   0% /sys/firmware
/ $
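If you also want to confirm that the volume is writable, and not just mounted, a quick test is to create a file on it from within the Pod. Note that since the container runs as a non-root user (runAsUser: 1000), this assumes that user has write permission on the volume; if it does not, adding an fsGroup setting to the Pod's spec.securityContext is one way to grant it:

$ kubectl exec block-pod -- sh -c "echo hello > /demo/hello.txt && cat /demo/hello.txt"
hello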
This looks good. We have successfully provisioned a volume via the Volume Service in VCF Automation, and have been able to consume that volume by assigning it to a Pod in a VKS cluster. Great!
ReadWriteMany (RWX) Volumes
Now, you can also use the Volume Service in VCF Automation to provision a Read-Write-Many volume which can be shared by multiple Pods in a VKS cluster. Read-Write-Many volumes have many use cases. I was able to find these by doing a quick Google search.
- Shared File Systems and Collaboration Tools: Applications like content management systems, shared document repositories, or collaborative editing tools where multiple users or processes need concurrent read and write access to a common dataset.
- Distributed Caching: Shared caches used by multiple application instances to improve performance by storing frequently accessed data in a central location accessible to all.
- Centralized Logging and Monitoring Data: Storing logs or monitoring metrics from various application components in a central location where multiple tools or services might need to write data and multiple analysis tools need to read from it.
- Machine Learning Workloads: Where large datasets need to be accessed and potentially updated by multiple training or inference pods concurrently.
- Legacy Applications: When migrating older applications that rely heavily on shared file systems for data storage and cannot easily be refactored to use distributed databases or object storage.
- Clustered Applications: Certain clustered applications that require a shared file system for configuration, state, or data synchronization across multiple instances.
The one thing to be aware of is that RWX volumes created by the Volume Service in VCFA can currently only be consumed by Pods in VKS. They cannot be consumed by VMs created via the VM Service at this time. Note also that since RWX volumes are backed by vSAN File Shares in VCF 9.0, you will need to have vSAN File Service enabled and configured in the Supervisor. You will also have to tell the Supervisor that you wish to consume vSAN File Services. Once those requirements are met, follow the steps outlined above, changing the access mode from ReadWriteOnce to ReadWriteMany in the specifications. It should then be possible for multiple Pods in a VKS cluster to access the same volume created by the Volume Service.
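For example, the only change to the PV and PVC manifests shown earlier would be the access mode, as sketched below; the remaining fields (capacity, storage class, volumeHandle and so on) would still need to match the RWX volume created in the Volume Service:

# Excerpt - the same accessModes change is made in both the PV and the PVC spec
spec:
  accessModes:
    - ReadWriteMany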
Thank you for reading this far. As always, reach out if there are any questions regarding this blog post.