In this example, the vSphere CSI (Container Storage Interface) driver has already been deployed without any consideration given to vSAN File Service network permissions. Therefore, any PVs backed by file shares will be read-write accessible from any IP address by default. We can verify this by SSH’ing onto my K8s cluster control plane node and displaying the current contents of the vsphere-config-secret. This secret holds the vSphere CSI driver configuration file, located in /etc/kubernetes/csi-vsphere.conf. While the configuration is stored base64 encoded, we can decode it to see the original contents, as shown below (slightly modified).
$ kubectl get secret vsphere-config-secret -n kube-system
NAME                    TYPE     DATA   AGE
vsphere-config-secret   Opaque   1      72d

$ kubectl get secret vsphere-config-secret -n kube-system -o yaml
apiVersion: v1
data:
  csi-vsphere.conf: CltHbG9iYWxdCmNsd<<<--shortened--->>>4cy16b25lCg==
kind: Secret
metadata:
  creationTimestamp: "2021-04-28T10:09:00Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:csi-vsphere.conf: {}
      f:type: {}
    manager: kubectl-create
    operation: Update
    time: "2021-04-28T10:09:00Z"
  name: vsphere-config-secret
  namespace: kube-system
  resourceVersion: "2502"
  uid: 1fa17219-0d91-437c-831b-094710723f56
type: Opaque

$ echo "CltHbG9iYWxdCmNsd<<<--shortened--->>>4cy16b25lCg==" | base64 -d
[Global]
cluster-id = "cormac-upstream"
cluster-distribution = "native"

[VirtualCenter "AA.BB.51.106"]
user = "administrator@vsphere.local"
password = "******"
port = "443"
insecure-flag = "1"
datacenters = "OCTO-Datacenter"

[Labels]
region = k8s-region
zone = k8s-zone
So there is nothing here about network permissions. Let’s now assume that I want to provide read-write access to only one of my networks (VLAN 51), and read-only access to a different network (VLAN 62). The steps to achieve this are:
- Build a new csi-vsphere.conf file or modify the current one if it already exists
- Delete the existing vsphere-config-secret
- Create a new vsphere-config-secret with the new csi-vsphere.conf contents
- Create a new RWX PV backed by vSAN File Service and verify that it has the new network permissions
Here are the contents of my new csi-vsphere.conf, located in /etc/kubernetes on the control plane node. Note the addition of two NetPermissions stanzas, and note the quotes around the names “VLAN51” and “VLAN62”. Strings are expected here, and I had to quote them to get them recognized. The contents of each stanza are pretty straightforward: access from network 51 is given read-write permissions, while access from network 62 has read-only permissions. Note that “VLAN51” and “VLAN62” are simply identifiers; I could have called them anything, so long as each string is unique. The identifiers have no bearing on the underlying network topology, or anything important like that. For the sake of security, I have hidden the password and masked the first two octets of my network range with AA and BB respectively.
$ cat csi-vsphere.conf
[Global]
cluster-id = "cormac-upstream"
cluster-distribution = "native"

[VirtualCenter "AA.BB.51.106"]
user = "administrator@vsphere.local"
password = "*********"
port = "443"
insecure-flag = "1"
datacenters = "OCTO-Datacenter"

[NetPermissions "VLAN51"]
ips = "AA.BB.51.0/24"
permissions = "READ_WRITE"
rootsquash = false

[NetPermissions "VLAN62"]
ips = "AA.BB.62.0/26"
permissions = "READ_ONLY"
rootsquash = false

[Labels]
region = k8s-region
zone = k8s-zone
Now I need to delete the current secret and create a new one from the updated contents of csi-vsphere.conf. I am doing this operation from the /etc/kubernetes folder on the control plane nodes in my K8s cluster.
$ kubectl delete secret vsphere-config-secret --namespace=kube-system
secret "vsphere-config-secret" deleted

$ kubectl create secret generic vsphere-config-secret --from-file=csi-vsphere.conf --namespace=kube-system
secret/vsphere-config-secret created
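As a quick sanity check, the new configuration can be pulled straight back out of the secret and decoded in one step. This is just a convenience sketch; it assumes the data key is csi-vsphere.conf (it matches the file name given to --from-file), and the dot in the key name has to be escaped inside the jsonpath expression:

$ kubectl get secret vsphere-config-secret -n kube-system -o jsonpath="{.data['csi-vsphere\.conf']}" | base64 -d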
If you want to, you can dump out the secret and decode it once again (for example, using the one-liner sketched above) to make sure the contents have been updated successfully. Assuming the secret is successfully created, there is no need to do anything with the CSI driver; it will automatically use the updated secret and CSI configuration. The next step is to create a new dynamic RWX PV backed by a vSAN File Service file share. Below are some manifests to do just that. The first is the StorageClass manifest. While it references a storage policy that resolves to a vSAN datastore, it is the inclusion of the line csi.storage.k8s.io/fstype: nfs4 that indicates that this volume should be built using a file share rather than block storage on vSAN.
% cat file-sc-netperms.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsan-file-netperms
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vsanfs-octo-c"
  csi.storage.k8s.io/fstype: nfs4
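Assuming the file name shown above, the StorageClass can be created and checked with a couple of standard kubectl commands; a quick sketch, with output omitted:

% kubectl apply -f file-sc-netperms.yaml
% kubectl get sc vsan-file-netperms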
With the StorageClass in place, there is another manifest that creates the Persistent Volume Claim along with two busybox Pods which share access to the resulting Persistent Volume. Each Pod runs a command that simply writes a message into a file (called index.html) on the file share/NFS mount (/mnt/volume1). If the Pods have read-write access, then they should both be able to write to the file on the share. The sleep part of the command just keeps the pod running after it has been started.
% cat file-pvc-pod-netperms.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: file-pvc-netperms
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
  storageClassName: vsan-file-netperms
---
apiVersion: v1
kind: Pod
metadata:
  name: app-1-netperm
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["/bin/sh", "-c", "echo 'hello from app1' >> /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
      volumeMounts:
        - name: file-volume-netp
          mountPath: /mnt/volume1
  restartPolicy: Always
  volumes:
    - name: file-volume-netp
      persistentVolumeClaim:
        claimName: file-pvc-netperms
---
apiVersion: v1
kind: Pod
metadata:
  name: app-2-netperm
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["/bin/sh", "-c", "echo 'hello from app2' >> /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
      volumeMounts:
        - name: file-volume-netp
          mountPath: /mnt/volume1
  restartPolicy: Always
  volumes:
    - name: file-volume-netp
      persistentVolumeClaim:
        claimName: file-pvc-netperms
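As a rough sketch (file and object names as above, output omitted), the claim and the two Pods can be deployed and checked as follows. The PVC should report a Bound status and both Pods should reach Running before moving on:

% kubectl apply -f file-pvc-pod-netperms.yaml
% kubectl get pvc file-pvc-netperms
% kubectl get pods app-1-netperm app-2-netperm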
Let’s now examine the volume from the vSphere client and see what the Network Permissions look like:
Excellent! Looks like the permissions have taken effect. The network “VLAN51” has read-write access, while the network “VLAN62” has read-only access. The last test is to make sure that my containers have been able to mount the volume, and are able to write to it.
% kubectl exec -it app-2-netperm -- sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  58.0G     10.6G     47.4G  18% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     1.9G         0      1.9G   0% /sys/fs/cgroup
/dev/sda1                58.0G     10.6G     47.4G  18% /dev/termination-log
vsan-fs1.rainpole.com:/521bbf46-3370-bdf5-ce53-b804ad20c24f
                          2.0T         0      2.0T   0% /mnt/volume1
/dev/sda1                58.0G     10.6G     47.4G  18% /etc/resolv.conf
/dev/sda1                58.0G     10.6G     47.4G  18% /etc/hostname
/dev/sda1                58.0G     10.6G     47.4G  18% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     1.9G     12.0K      1.9G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                     1.9G         0      1.9G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     1.9G         0      1.9G   0% /proc/scsi
tmpfs                     1.9G         0      1.9G   0% /sys/firmware
/ # cd /mnt/volume1/
/mnt/volume1 # ls
index.html
/mnt/volume1 # cat index.html
hello from app2
hello from app1
/mnt/volume1 #
Looks good to me. Both Pods, running on K8s worker nodes connected to the “VLAN51” network, have been able to write to the volume. We have one final test to do, and that is to make sure that I cannot write to these volumes if I try to access them from a network that has only read-only permissions. We will do the same steps as those highlighted previously, and this time set the “VLAN51” permissions to read-only (see the snippet below); this is the network where my K8s cluster nodes reside. Now if I deploy my simple application once again, the file share is once again created, but all networks have read-only access:
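For reference, the only change needed in csi-vsphere.conf for this second test is the permissions value in the “VLAN51” stanza, something along these lines, with the rest of the file unchanged and the secret deleted and recreated exactly as before:

[NetPermissions "VLAN51"]
ips = "AA.BB.51.0/24"
permissions = "READ_ONLY"
rootsquash = false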
And now if I examine my Pods which are trying to write to a file on the read-only file share, I see some issues:
% kubectl get pods
NAME            READY   STATUS             RESTARTS   AGE
app-1-netperm   0/1     CrashLoopBackOff   3          78s
app-2-netperm   0/1     Error              3          78s
% kubectl logs app-1-netperm
/bin/sh: can't create /mnt/volume1/index.html: Read-only file system
This looks like it is working as expected. The Pods are unable to write to this volume since the network has only been given read-only permissions. Hopefully this has given you some insight into how you can manage network access to RWX Kubernetes Persistent Volumes backed by vSAN File Service file shares.
For customers using TKGI (Tanzu Kubernetes Grid Integrated), a slightly different approach is taken to create the network permissions. With TKGI, the configuration file is created locally with the required network permissions and then passed to the cluster using the tkgi command line tool. Click here for further information on how to customize, deploy and manage vSphere CNS volumes using the vSphere CSI driver in TKGI.