Dynamic RWX volumes now supported in TKC in vSphere with Tanzu
Last week, a new Tanzu Kubernetes release (TKr v1.22.9) became available which allows Tanzu Kubernetes clusters (TKCs) deployed through the TKG Service (TKGS) on vSphere with Tanzu to support dynamic read-write-many (RWX) volumes. This means that if vSAN File Service is available on the vSphere cluster where vSphere with Tanzu is enabled, volumes can be dynamically created and shared between multiple Pods. This is something that many customers have been waiting for, so I am delighted to see that it is finally available.
There is one setup step needed in vSphere with Tanzu to enable this functionality. In the vSphere UI, select the cluster where vSphere with Tanzu is enabled, select Configure, and under the Supervisor Cluster section, select Storage as shown below. Then select the option to Activate file volume support.
This will pop up the following warning message regarding encryption and access control lists.
If you still wish to proceed, select the confirmation checkbox and click Activate. File volume support will now be available for Tanzu Kubernetes clusters (TKCs) deployed by the TKG Service. To use RWX volumes, clusters need to be upgraded to v1.22.9 or deployed with this new version. If your content library is subscribed to the TKr URL, then this new version should be automatically available (assuming you have compatible vCenter and Supervisor cluster versions). In this environment I am running the following versions:
- vCenter Server 7.0U3 build 19717403
- Supervisor Cluster v1.22.6+vmware.1-vsc0.0.15-19705778
When I check the available TKrs, I see the new v1.22.9 available.
% kubectl get tkr | grep v1.22
v1.22.9---vmware.1-tkg.1.cc71bc8   1.22.9+vmware.1-tkg.1.cc71bc8   True   True   6d12h
I can proceed with deploying a new TKC using the following manifest:
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-v1-22-9
  namespace: demo-ns
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: guaranteed-small
      storageClass: vsan-default-storage-policy
      tkr:
        reference:
          name: v1.22.9---vmware.1-tkg.1.cc71bc8
    nodePools:
    - name: worker-pool-1
      replicas: 2
      vmClass: guaranteed-small
      storageClass: vsan-default-storage-policy
      tkr:
        reference:
          name: v1.22.9---vmware.1-tkg.1.cc71bc8
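For reference, deploying the cluster and getting a context for it can be done with the kubectl vsphere plugin. A minimal sketch, assuming the manifest above is saved as tkg-cluster-v1-22-9.yaml (the filename and Supervisor address are placeholders; the apply is issued against the Supervisor cluster context, and you would substitute your own SSO user):

% kubectl apply -f tkg-cluster-v1-22-9.yaml

% kubectl vsphere login --server=<supervisor-address> \
  --tanzu-kubernetes-cluster-namespace demo-ns \
  --tanzu-kubernetes-cluster-name tkg-cluster-v1-22-9 \
  --vsphere-username administrator@vsphere.local

% kubectl config use-context tkg-cluster-v1-22-9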
Once the new cluster is deployed and I have switched context to it, I can attempt to create a new RWX PVC. Below is the manifest for the volume. I have also added a label so that the volume is easy to identify in the vSphere Client.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: file-pvc-cor
  labels:
    app: rwx
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: vsan-default-storage-policy
% kubectl apply -f vsan-fs-pvc.yaml
persistentvolumeclaim/file-pvc-cor created
% kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
file-pvc-cor   Bound    pvc-b04c4fc4-9d5b-4ced-aa38-feb9c8da7eb7   50Gi       RWX            vsan-default-storage-policy   12s

% kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS                  REASON   AGE
pvc-b04c4fc4-9d5b-4ced-aa38-feb9c8da7eb7   50Gi       RWX            Delete           Bound    default/file-pvc-cor   vsan-default-storage-policy            10s
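To show the volume actually being shared, here is a minimal Pod sketch that mounts the RWX claim. The Pod name, image, and mount path are purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: rwx-demo-1               # hypothetical name
spec:
  containers:
  - name: app
    image: busybox               # any image with a shell will do
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /mnt/shared
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: file-pvc-cor    # the RWX PVC created above

Deploying a second copy under a different name (say rwx-demo-2) and writing a file from one Pod should make it visible in the other, since both mounts are backed by the same vSAN file share:

% kubectl exec rwx-demo-1 -- sh -c 'echo hello > /mnt/shared/test.txt'
% kubectl exec rwx-demo-2 -- cat /mnt/shared/test.txt
hello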
The read-write-many (RWX) volume has been created successfully on the TKC in vSphere with Tanzu. Pods in the TKC can share access to this volume. Since this uses the vSphere CSI driver and Cloud Native Storage (CNS), the volume can also be examined from the vSphere Client. I can filter the volume listing based on the label added to the PVC manifest earlier.
And since this is built on a vSAN File Service file share, the underlying share can also be queried by clicking on the View File Shares option.
It is great news that the vSAN File Service can now provide dynamic RWX volumes to applications deployed in Tanzu Kubernetes clusters on vSphere with Tanzu.
Does VMware have plans to add RWX PVC support on traditional datastores (VMFS)? What is the limitation preventing this?
The block volume (PV/VMDK) would need to be formatted with a clustered file system by the CSI driver. At present, the driver only formats block volumes with ext4 or xfs, or leaves them as raw block volumes, iirc.
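To illustrate the point, the filesystem for a block volume is chosen through the StorageClass. A sketch with a hypothetical class name, showing that the fstype parameter only takes local (non-clustered) filesystems:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: block-ext4                    # hypothetical name
provisioner: csi.vsphere.vmware.com
parameters:
  csi.storage.k8s.io/fstype: ext4     # ext4 or xfs; no clustered filesystem option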
Is there demand in cloud native workloads for RWX outside of dev workloads? I do not see that there is a demand. What is the use case? And if it is an NFS backend, would that not bring a whole can of worms?
Presumably the main use case is for applications that wish to share access to data. NFS has been around a long time, and is a tried and tested technology. I’m not sure it introduces any more complexity when compared to other storage solutions.