TKG & vSAN File Service for RWX (Read-Write-Many) Volumes

A common question I get in relation to VMware Tanzu Kubernetes Grid (TKG) is whether or not it supports vSAN File Service, and specifically the read-write-many (RWX) feature for container volumes. To answer this question, we need to distinguish between the ways TKG can be provisioned. There is the multi-cloud version of TKG, which can run on vSphere, AWS or Azure, and is deployed from a TKG management cluster. Then there is the embedded TKG edition, where ‘workload clusters’ are deployed in Namespaces via vSphere with Tanzu / VCF with Tanzu. Currently, only the multi-cloud version of TKG can dynamically provision RWX volumes from vSAN File Service. At the time of writing, the embedded version of TKG does not support this feature, since the CSI driver in the Supervisor Cluster and the pvCSI driver in the “workload / guest” clusters require updating to support this functionality.

The next section quickly demonstrates how to use RWX volumes from vSAN File Service with the multi-cloud version of TKG, in this case TKG v1.2. I have already deployed the management cluster as well as a workload cluster, which is made up of 3 control plane nodes and 5 worker nodes. I will build a new StorageClass called “bigstripe”, backed by a vSphere/vSAN Storage Policy called “Big-Stripe”, and will use it to deploy a read-write-many volume. This triggers the creation of a vSAN File Service file share, which is then consumed as the read-write-many volume. The steps are shown below.

cormac@cormac-tkgm:~$ tkg version
Client:
        Version: v1.2.0
        Git commit: 05b233e75d6e40659247a67750b3e998c2d990a5


cormac@cormac-tkgm:~$ tkg get clusters --include-management-cluster
NAME                             NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
my-cluster                       default     running  3/3           5/5      v1.19.1+vmware.2  <none>
tkg-mgmt-vsphere-20201130160352  tkg-system  running  3/3           1/1      v1.19.1+vmware.2  management


cormac@cormac-tkgm:~$ tkg get credentials my-cluster
Credentials of workload cluster 'my-cluster' have been saved
You can now access the cluster by running 'kubectl config use-context my-cluster-admin@my-cluster'


cormac@cormac-tkgm:~$ kubectl config use-context my-cluster-admin@my-cluster
Switched to context "my-cluster-admin@my-cluster".


cormac@cormac-tkgm:~$ kubectl get nodes
NAME                               STATUS   ROLES    AGE     VERSION
my-cluster-control-plane-56hjh     Ready    master   6d17h   v1.19.1+vmware.2
my-cluster-control-plane-gdd2b     Ready    master   6d17h   v1.19.1+vmware.2
my-cluster-control-plane-qwjf6     Ready    master   6d17h   v1.19.1+vmware.2
my-cluster-md-0-6946b5db64-2z8md   Ready    <none>   6d17h   v1.19.1+vmware.2
my-cluster-md-0-6946b5db64-98gnt   Ready    <none>   6d17h   v1.19.1+vmware.2
my-cluster-md-0-6946b5db64-f6wz9   Ready    <none>   4d      v1.19.1+vmware.2
my-cluster-md-0-6946b5db64-rm9sm   Ready    <none>   6d17h   v1.19.1+vmware.2
my-cluster-md-0-6946b5db64-smdtl   Ready    <none>   6d1h    v1.19.1+vmware.2


cormac@cormac-tkgm:~$ kubectl get sc
NAME                PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
default (default)   csi.vsphere.vmware.com   Delete          Immediate           false                  6d17h


cormac@cormac-tkgm:~$ cat big-stripe-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: bigstripe
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  storagePolicyName: "Big-Stripe"


cormac@cormac-tkgm:~$ kubectl apply -f big-stripe-storage-class.yaml
storageclass.storage.k8s.io/bigstripe created


cormac@cormac-tkgm:~$ kubectl get sc
NAME                  PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
bigstripe (default)   csi.vsphere.vmware.com   Delete          Immediate           false                  6s
default (default)     csi.vsphere.vmware.com   Delete          Immediate           false                  6d17h
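

A quick aside: the manifest above marks bigstripe as the default StorageClass, so both bigstripe and the original default are now flagged as (default). This does not affect the rest of the demo, but if you prefer a single default, the annotation can be cleared on one of them. A minimal sketch:

# Optional: clear the default-class annotation on the original StorageClass
kubectl patch storageclass default -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'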


cormac@cormac-tkgm:~$ cat file-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: file-pvc-demo-5g
spec:
  storageClassName: bigstripe
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi


cormac@cormac-tkgm:~$ kubectl apply -f file-pvc.yaml
persistentvolumeclaim/file-pvc-demo-5g created

cormac@cormac-tkgm:~$ kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
file-pvc-demo-5g   Bound    pvc-b5db5000-2fe5-4aba-935a-69f7b70aae85   5Gi        RWX            bigstripe      6s


cormac@cormac-tkgm:~$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
pvc-b5db5000-2fe5-4aba-935a-69f7b70aae85   5Gi        RWX            Delete           Bound    default/file-pvc-demo-5g   bigstripe               <invalid>
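
Although I have not included the output here, you can also inspect the PV directly from Kubernetes to see the CSI driver and volume handle backing this RWX volume. Something along these lines, substituting your own PV name:

# Hypothetical check, not captured in the output above
kubectl describe pv pvc-b5db5000-2fe5-4aba-935a-69f7b70aae85
kubectl get pv pvc-b5db5000-2fe5-4aba-935a-69f7b70aae85 -o yaml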

And because of the vSphere CNS feature, much of the Kubernetes information related to this volume bubbles up in the vSphere Client, something I have covered many times before on this blog. However, I wanted to include it here again to highlight how the CSI driver used by TKG integrates with CNS.
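
If you prefer the CLI to the vSphere Client, the govc utility can also list CNS volumes. This is just a sketch, assuming a recent govc build with the CNS volume commands and the usual GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD environment variables configured:

# Sketch only: list CNS volumes; the PV name (pvc-...) should appear in the listing
govc volume.ls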

The volume is also visible in the vSAN File Service File Share view:

A final step is to deploy two Pods to show that they are able to share access to the same RWX PV.

cormac@cormac-tkgm:~$ cat file-pod-a.yaml
apiVersion: v1
kind: Pod
metadata:
  name: file-pod-a
spec:
  containers:
  - name: file-pod-a
    image: "cormac-tkgm.corinternal.com/library/busybox"
    volumeMounts:
    - name: file-vol
      mountPath: "/mnt/volume1"
    command: [ "sleep", "1000000" ]
  volumes:
    - name: file-vol
      persistentVolumeClaim:
        claimName: file-pvc-demo-5g


cormac@cormac-tkgm:~$ cat file-pod-b.yaml
apiVersion: v1
kind: Pod
metadata:
  name: file-pod-b
spec:
  containers:
  - name: file-pod-b
    image: "cormac-tkgm.corinternal.com/library/busybox"
    volumeMounts:
    - name: file-vol
      mountPath: "/mnt/volume1"
    command: [ "sleep", "1000000" ]
  volumes:
    - name: file-vol
      persistentVolumeClaim:
        claimName: file-pvc-demo-5g


cormac@cormac-tkgm:~$ kubectl apply -f file-pod-a.yaml
pod/file-pod-a created


cormac@cormac-tkgm:~$ kubectl apply -f file-pod-b.yaml
pod/file-pod-b created


cormac@cormac-tkgm:~$ kubectl get pods
NAME         READY   STATUS              RESTARTS   AGE
file-pod-a   1/1     Running             0          9s
file-pod-b   0/1     ContainerCreating   0          4s


cormac@cormac-tkgm:~$ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
file-pod-a   1/1     Running   0          36s
file-pod-b   1/1     Running   0          31s
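
As a final check, not shown in the output above, you could write a file from one Pod and read it back from the other to confirm that both really are sharing the same file share. A minimal sketch, assuming the busybox image provides a standard shell:

# Hypothetical verification: write from file-pod-a, read from file-pod-b
kubectl exec file-pod-a -- sh -c 'echo "written by file-pod-a" > /mnt/volume1/hello.txt'
kubectl exec file-pod-b -- cat /mnt/volume1/hello.txt
# file-pod-b should print "written by file-pod-a" if the RWX share is working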

And if we return to CNS, we can see both Pods associated with the PV in the Kubernetes objects view:

So yes, you can use vSAN File Services for read-write-many container volumes with the multi-cloud edition of TKG.
