VCF 9.0 Volume Service – Consuming static RWX volumes via VKS
Following on from my previous post on this topic, a number of people reached out to ask how to add read-write-many (RWX) volumes to a Pod in VKS. Again, for dynamic volumes, this is quite simple to do. But what about static volumes which were initially created by the Volume Service? This is a summary of what I posted in my previous blog in relation to RWX volumes. “Since RWX volumes are backed by vSAN File Shares in VCF 9.0, you will need to have vSAN File Service enabled and configured. You will also have to tell the Supervisor that you wish to consume vSAN File Service by enabling File Volume Support. Once those requirements are met, create the necessary YAML manifests, setting the accessMode to ReadWriteMany in the specifications. It should then be possible for multiple Pods in a VKS cluster to access the same volume created by the Volume Service.” Let’s see how to do that.
Use cases for RWX volumes
You might ask why RWX “file” volumes are needed. Aren’t block-based read-write-once (RWO) volumes enough? Here are some common use cases for ReadWriteMany volumes, as published in our vSphere CSI driver documentation:
- Shared File Systems and Collaboration Tools: Applications like content management systems, shared document repositories, or collaborative editing tools where multiple users or processes need concurrent read and write access to a common dataset.
- Distributed Caching: Shared caches used by multiple application instances to improve performance by storing frequently accessed data in a central location accessible to all.
- Centralized Logging and Monitoring Data: Storing logs or monitoring metrics from various application components in a central location where multiple tools or services might need to write data and multiple analysis tools need to read from it.
- Machine Learning Workloads: Where large datasets need to be accessed and potentially updated by multiple training or inference pods concurrently.
- Legacy Applications: When migrating older applications that rely heavily on shared file systems for data storage and cannot easily be refactored to use distributed databases or object storage.
- Clustered Applications: Certain clustered applications that require a shared file system for configuration, state, or data synchronization across multiple instances.
Setup
Let’s go through those setup steps first. As mentioned, File Service must be enabled on vSAN.
Then verify that the Supervisor has File Volume Support enabled.
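Both of those checks are done in the vSphere Client. As an optional extra sanity check before building any manifests, you could also confirm from the command line that the target VKS cluster has the vSphere CSI driver registered and that the storage class we will reference later (vsan-no-spaces in my environment) has been synced to it. A minimal check might look like this, with kubectl already switched to the VKS cluster context:

$ kubectl get csidriver csi.vsphere.vmware.com
$ kubectl get storageclass vsan-no-spaces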
Create RWX Volume via Volume Service
The next step is almost identical to the steps in the previous blog post. In that post, a read-write-once (RWO) block volume was created. Now we are creating a read-write-many (RWX) file volume. Here it is in the Volume Service view in VCF Automation, where the new file volume (RWX) is shown alongside the block volume (RWO) which was used in the previous demo. These volumes are currently present in the Supervisor.
This volume (name: volume-claim-share) is also visible as a File Share in vSAN File Service, and can be observed in the vSphere Client.
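If you prefer the command line to the vSphere Client, recent builds of govc also include CNS volume commands which can list the backing volume. This is purely optional and assumes govc is installed and configured (GOVC_URL and credentials) against the vCenter managing the vSAN cluster; the friendly name column in the output should show the volume created by the Volume Service:

$ govc volume.ls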
Create the PV, PVC & Pod YAML Manifests
Now we can go ahead and build the YAML manifests to create objects on the VKS (vSphere Kubernetes Service) cluster. Once again, there is a PV manifest and a PVC manifest for the volume, but this time there are two Pod manifests to show that both Pods can share access to this RWX volume. The PV entry spec.csi.volumeHandle must include the PVC name retrieved from either the Supervisor or the VCFA Volume Service, i.e. “volume-claim-share”. Set spec.accessModes to ReadWriteMany and spec.capacity.storage to match the size of the volume.
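If you have kubectl access to the Supervisor, one way to retrieve the PVC name and capacity is simply to list the claims in the vSphere namespace where the Volume Service created the volume. The namespace name below (demo-namespace) is just a placeholder for your own vSphere namespace; the NAME, CAPACITY and ACCESS MODES columns give you everything needed for the PV manifest:

$ kubectl get pvc -n demo-namespace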
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-vks-file-pv
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
spec:
  storageClassName: vsan-no-spaces
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  claimRef:
    namespace: default
    name: static-vks-file-pvc
  csi:
    driver: "csi.vsphere.vmware.com"
    volumeAttributes:
      type: "vSphere CNS File Volume"
    volumeHandle: "volume-claim-share"
PVC
For the PVC manifest, again ensure that spec.accessModes is set to ReadWriteMany and that spec.resources.requests.storage is also set to match the size of the volume. The spec.volumeName should be set to the name of the PV in the previous manifest.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-vks-file-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
  storageClassName: vsan-no-spaces
  volumeName: static-vks-file-pv
Pods
Finally, create the manifests for the two Pods so that we can confirm that the volume can be accessed by both successfully. Include a volume requirement which matches the PVC in spec.volumes.persistentVolumeClaim.claimName. As before, the Pods have very simple busybox containers, but a number of spec.containers.securityContext settings are required for the Pod to successfully start on a VKS cluster.
Pod 1
apiVersion: v1
kind: Pod
metadata:
  name: file-pod-1
spec:
  containers:
    - name: busybox
      image: "registry.k8s.io/busybox:latest"
      volumeMounts:
        - name: file-vol
          mountPath: "/demo"
      command: [ "sleep", "1000000" ]
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        runAsNonRoot: true
        runAsUser: 1000
        seccompProfile:
          type: "RuntimeDefault"
  volumes:
    - name: file-vol
      persistentVolumeClaim:
        claimName: static-vks-file-pvc
Pod 2
apiVersion: v1
kind: Pod
metadata:
  name: file-pod-2
spec:
  containers:
    - name: busybox
      image: "registry.k8s.io/busybox:latest"
      volumeMounts:
        - name: file-vol
          mountPath: "/demo"
      command: [ "sleep", "1000000" ]
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        runAsNonRoot: true
        runAsUser: 1000
        seccompProfile:
          type: "RuntimeDefault"
  volumes:
    - name: file-vol
      persistentVolumeClaim:
        claimName: static-vks-file-pvc
Test RWX volume access
Now create the PV, PVC and the first Pod with RWX volume access in the VKS cluster, using kubectl to apply the manifests above. The kubectl get outputs shown here also capture the PV, PVC and Pod from the previous blog post/test.
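For reference, the apply sequence might look like the following. The filenames are simply whatever you saved the manifests above as; the PV and PVC filenames shown here are just examples, while file-pod-2.yaml is applied later in the post:

$ kubectl apply -f static-vks-file-pv.yaml
$ kubectl apply -f static-vks-file-pvc.yaml
$ kubectl apply -f file-pod-1.yaml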
After creating the first Pod, we can open a shell to it using kubectl exec. Notice how in this case, the RWX volume mounted to the Pod (observed using the df command) is a vSAN File Service file share, as per the screenshot captured earlier (10.13.10.191:/52ed4855-d1d8-39b1-66ba-2bd39172258f).
$ kubectl get pvc,pv,pod
NAME                                         STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS     VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/static-vks-block-pvc   Bound    static-vks-block-pv   7Gi        RWO            vsan-no-spaces   <unset>                 35m
persistentvolumeclaim/static-vks-file-pvc    Bound    static-vks-file-pv    8Gi        RWX            vsan-no-spaces   <unset>                 3m24s

NAME                                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS     VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/static-vks-block-pv   7Gi        RWO            Delete           Bound    default/static-vks-block-pvc   vsan-no-spaces   <unset>                          35m
persistentvolume/static-vks-file-pv    8Gi        RWX            Delete           Bound    default/static-vks-file-pvc    vsan-no-spaces   <unset>                          3m38s

NAME             READY   STATUS    RESTARTS   AGE
pod/block-pod    1/1     Running   0          26m
pod/file-pod-1   1/1     Running   0          46s

$ kubectl exec -it file-pod-1 -- "/bin/sh"
/ $ df
Filesystem                                           1K-blocks     Used  Available Use% Mounted on
overlay                                               20450876  6920160   12466528  36% /
tmpfs                                                    65536        0      65536   0% /dev
10.13.10.191:/52ed4855-d1d8-39b1-66ba-2bd39172258f     8388608        0    8286208   0% /demo
/dev/sda3                                             20450876  6920160   12466528  36% /etc/hosts
/dev/sda3                                             20450876  6920160   12466528  36% /dev/termination-log
/dev/sda3                                             20450876  6920160   12466528  36% /etc/hostname
/dev/sda3                                             20450876  6920160   12466528  36% /tmp/resolv.conf
shm                                                      65536        0      65536   0% /dev/shm
tmpfs                                                 13584916       12   13584904   0% /tmp/secrets/kubernetes.io/serviceaccount
tmpfs                                                  8186120        0    8186120   0% /proc/acpi
tmpfs                                                    65536        0      65536   0% /proc/kcore
tmpfs                                                    65536        0      65536   0% /proc/keys
tmpfs                                                    65536        0      65536   0% /proc/latency_stats
tmpfs                                                    65536        0      65536   0% /proc/timer_list
tmpfs                                                  8186120        0    8186120   0% /proc/scsi
tmpfs                                                  8186120        0    8186120   0% /sys/firmware
/ $
/ $ cd /demo
/demo $ echo "hello from file pod 1" > pod1.txt
/demo $ cat pod1.txt
hello from file pod 1
/demo $ exit
Let’s now try to access the same RWX volume from the other Pod. First create the new Pod, then exec into it as before and ensure we can access the same volume as the first Pod.
$ kubectl apply -f file-pod-2.yaml
pod/file-pod-2 created

$ kubectl get pvc,pv,pod
NAME                                         STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS     VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/static-vks-block-pvc   Bound    static-vks-block-pv   7Gi        RWO            vsan-no-spaces   <unset>                 38m
persistentvolumeclaim/static-vks-file-pvc    Bound    static-vks-file-pv    8Gi        RWX            vsan-no-spaces   <unset>                 6m14s

NAME                                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS     VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/static-vks-block-pv   7Gi        RWO            Delete           Bound    default/static-vks-block-pvc   vsan-no-spaces   <unset>                          38m
persistentvolume/static-vks-file-pv    8Gi        RWX            Delete           Bound    default/static-vks-file-pvc    vsan-no-spaces   <unset>                          6m28s

NAME             READY   STATUS    RESTARTS   AGE
pod/block-pod    1/1     Running   0          29m
pod/file-pod-1   1/1     Running   0          3m36s
pod/file-pod-2   1/1     Running   0          6s

$ kubectl exec -it file-pod-2 -- "/bin/sh"
/ $ cd /demo
/demo $ ls
pod1.txt
/demo $ cat pod1.txt
hello from file pod 1
/demo $ echo "hello from file pod 2" > pod2.txt
/demo $ ls
pod1.txt   pod2.txt
/demo $ cat pod2.txt
hello from file pod 2
/demo $
Success. Both Pods can read and write to the same RWX volume. That concludes the post, and hopefully you can now appreciate that RWX volumes built via the Volume Service in VCF Automation can be consumed by modern workloads deployed on vSphere Kubernetes Service clusters.