Static Persistent Volumes and Cloud Native Storage

Recently I was asked whether “statically” provisioned persistent volumes (PVs) in native, vanilla Kubernetes would be handled by Cloud Native Storage (CNS) in vSphere 7.0, and in turn appear in the vSphere client, just like a dynamically provisioned persistent volume. The short answer is yes, this is supported and works. This post walks through the details of how to do it.

I am going to use a file-based (NFS) volume for this “static” PV test. Note that there are two ways of provisioning a static file-based volume. The first is to use the in-tree NFS driver; volumes provisioned this way are not considered CSI persistent volumes, and so will not appear in CNS. The second is to use the vSphere CSI driver, which also has the ability to bubble the volume up to CNS and the vSphere client UI. Let’s look at both options.

In-tree NFS driver (no CNS interop)

Here is a set of manifest files that can be used to statically provision an NFS-based persistent volume using the in-tree NFS driver and have it mounted to a Pod; they also serve to highlight the difference between the in-tree NFS approach and the out-of-tree CSI approach shown later. These are YAML manifests for a Pod, a PVC and a PV. The Pod runs busybox and mounts the NFS volume at /nfs.
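
One thing to note is that the Pod and PVC manifests below reference a namespace called nfs-static, so this namespace needs to exist before they are applied. Assuming kubectl is already pointed at the correct cluster, it can be created as follows:

$ kubectl create namespace nfs-static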

apiVersion: v1
kind: Pod
metadata:
  name: nfs-client-pod
  namespace: nfs-static
spec:
  containers:
  - name: busybox
    image: "k8s.gcr.io/busybox"
    volumeMounts:
    - name: nfs-vol
      mountPath: "/nfs"
    command: [ "sleep", "1000000" ]
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs-client-pvc


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-client-pvc
  namespace: nfs-static
spec:
  storageClassName: nfs-client-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi


apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-client-pv
spec:
  storageClassName: nfs-client-sc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: "10.27.51.214"
    path: "/static-pv-test"

Of interest here are the references to storageClassName in both the PV and the PVC. These simply form a relationship between the PVC and the PV. In the PersistentVolume YAML, you can also see the in-tree NFS section, with both the server and the path to the volume.
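
Once all three manifests are applied, the PVC should bind to the PV almost immediately. A quick way to check the binding, and to confirm that the PV source is plain in-tree NFS (server and path, with no CSI driver details), is shown below. The manifest file names are simply placeholders for wherever the above YAML was saved:

$ kubectl apply -f nfs-pv.yaml -f nfs-pvc.yaml -f nfs-pod.yaml
$ kubectl get pv,pvc -n nfs-static
$ kubectl describe pv nfs-client-pv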

However, after creating this volume, it does not appear in the vSphere UI. To make that happen, we must use the CSI driver when attaching and mounting statically provisioned volumes.

Out-of-tree CSI driver with CNS

To begin, we need to find which file share we wish to mount to our Pod. I am going to use an existing read-write-many (RWX) file share created on vSAN File Services. Note how the file share is represented with a folder icon:

From this view, I can extract the UUID of the file share. This UUID is used to reference the share, rather than the server and path details used with the in-tree NFS driver. Here are the manifest files used to attach a statically provisioned file share to a Pod and also have it appear in CNS.

apiVersion: v1
kind: Pod
metadata:
  name: nfs-client-pod-csi
spec:
  containers:
  - name: busybox
    image: "k8s.gcr.io/busybox"
    volumeMounts:
    - name: nfs-vol-csi
      mountPath: "/nfs"
    command: [ "sleep", "1000000" ]
  volumes:
    - name: nfs-vol-csi
      persistentVolumeClaim:
        claimName: static-pvc-csi


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc-csi
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      static-pv-label-key: static-pv-label-value
  storageClassName: ""


apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-csi
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
  labels:
    "static-pv-label-key": "static-pv-label-value"
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: "csi.vsphere.vmware.com"
    fsType: nfs4
    volumeAttributes:
      type: "vSphere CNS File Volume"
     "volumeHandle": "file:26ca8a57-2ec1-46c9-9baf-06d409abb293"

There is not much difference between the Pod manifest used for the CSI approach and the one used with the in-tree NFS driver, apart from a few name changes.

The PersistentVolumeClaim manifest is a bit different, insofar as it now uses a selector with matchLabels rather than a storageClassName to tie it to the PersistentVolume. This is just an alternative way of binding the PVC to the PV. Note also that storageClassName is set to an empty string, which prevents a default StorageClass from dynamically provisioning a new volume for this claim.

The PersistentVolume manifest is quite a bit different when it comes to the spec. There are new metadata annotations and labels to bind it to the PVC. More importantly, the in-tree nfs section has been replaced with an out-of-tree csi section. Here we can see a reference to the CSI driver, the filesystem type, some volume attributes and a volume handle. The volumeHandle matches the UUID of the file share we retrieved from vSAN File Services earlier.

If we now go ahead and deploy this application (PV, PVC and Pod), we can see the volume appear in CNS.
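
As before, deployment is just a case of applying the three manifests; the file names below are placeholders for wherever the CSI versions of the manifests were saved:

$ kubectl apply -f static-pv-csi.yaml -f static-pvc-csi.yaml -f nfs-pod-csi.yaml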

$ kubectl get pvc
NAME             STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
static-pvc-csi   Bound    static-pv-csi   1Gi        RWX                           11m


$ kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
static-pv-csi   1Gi        RWX            Delete           Bound    default/static-pvc-csi                           11m
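
To see the CSI details that CNS is using for this volume, we can describe the PV; the source section of the output should show the csi.vsphere.vmware.com driver along with the file: volume handle supplied in the manifest:

$ kubectl describe pv static-pv-csi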


$ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
nfs-client-pod-csi   1/1     Running   0          58s
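
Finally, to verify that the file share really is mounted and writable inside the Pod, a couple of quick commands can be run against the busybox container (the test file name here is just an arbitrary example):

$ kubectl exec -it nfs-client-pod-csi -- df -h /nfs
$ kubectl exec -it nfs-client-pod-csi -- touch /nfs/static-pv-test-file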

Let’s revisit the vSAN File Services file share view. Notice how the volume has changed from a file share to a container volume, represented by a disk icon rather than a folder icon.

And we can also see this container volume in the Container view.

Click on the details view to see more information about the PV. CNS is now providing detailed information about the statically provisioned PV.

To conclude, statically provisioned NFS volumes are fully supported by CNS, so long as the CSI driver is used to provision them rather than the in-tree NFS driver. Do note that the scope of the CSI driver is a single vCenter server. Thus, at this time, there is no way to make a statically provisioned NFS-based persistent volume available to a different Kubernetes cluster that resides on different vSphere infrastructure managed by a different vCenter server. The volumeHandle reference in the PersistentVolume manifest would not be known to CNS on that other vCenter. An alternative option is to use the in-tree NFS driver if there is a requirement to do this. If you do have a requirement to have this cross-mount scenario handled by CNS-CSI, please let me know. We are always interested in learning more about these use cases.