CNS-CSI 2.1 with vSphere 7.0U1 – What’s new?

In this post, we will look at what is in the new release of the vSphere CSI driver for Kubernetes, as well as enhancements to Cloud Native Storage (CNS), the vSphere component that handles CSI requests on the vSphere infrastructure. CSI improvements will be available in version 2.1 of the driver, and the CNS components will be part of vSphere 7.0U1. Both are required for the features discussed here. The main objective of this release is two-fold: (a) to add CNS-CSI features to vSphere with Kubernetes so that it has a feature set similar to the CNS-CSI features available with vanilla Kubernetes, and (b) to introduce the CSI migration feature for Kubernetes distributions that continue to use the original VCP (vSphere Cloud Provider) driver for provisioning persistent volumes on vSphere storage. Let’s look at each of these in turn.

CNS-CSI 2.1 Improvements in vSphere with Kubernetes

There are a handful of improvements that have been made in this version of the CNS-CSI driver.

1. Volume Health

This feature will report the health of volumes that have been deployed on vSphere storage and can be queried from the PersistentVolumeClaim details. This feature is available in the vSphere with Kubernetes Supervisor Cluster and Tanzu Kubernetes Grid (TKG) “guest” clusters in vSphere 7.0U1.

chogan@chogan-a01 ~ % kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
demo-pvc-vsan   Bound    pvc-d347c2ed-fcc6-4e3a-9fd2-a293fa568835   2Gi        RWO            vsan-default-storage-policy   36m


chogan@chogan-a01 ~ % kubectl describe pvc | grep volumehealth
               volumehealth.storage.kubernetes.io/health: accessible

More details regarding Volume Health can be found here.

2. Volume Placement Observability

Another nice feature is that physical volume placement details can now be seen directly from the Container Volumes view in the vSphere Client.

3. Static PV support in both Supervisor and TKG cluster

This release also sees the ability to support Static Persistent Volumes in both the vSphere with Kubernetes Supervisor cluster, as well as the Tanzu Kubernetes Grid clusters. One of the main use cases for Static PVs is to re-use an existing PV and schedule a new Pod or Pods to use it. This could be a PV from a previously deployed application, or indeed a PV that has been restored from a backup. Further details about Static Persistent Volume provisioning can be found here.
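As a rough sketch, statically registering an existing volume in a TKG cluster involves creating a PersistentVolume that references the vSphere CSI driver, then binding a PVC to it by name. The object names, the storage class, and in particular the volumeHandle (the First Class Disk ID of the existing volume) below are placeholders for illustration only; consult the official documentation for the exact fields your release expects.

```yaml
# Hypothetical static-provisioning example. The volumeHandle must be the
# FCD ID of an existing vSphere volume; the value shown is a placeholder.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-demo
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: vsan-default-storage-policy
  csi:
    driver: csi.vsphere.vmware.com
    fsType: ext4
    volumeHandle: "4ef05624-0a32-43ec-9b1b-000000000000"   # placeholder FCD ID
---
# A PVC that binds to the PV above via volumeName rather than
# dynamic provisioning.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: vsan-default-storage-policy
  volumeName: static-pv-demo
```

Once the PVC reports Bound, a new Pod can mount it just like a dynamically provisioned volume.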

4. Offline Volume Grow

This feature was introduced in the vSphere 7.0 CNS-CSI release for vanilla Kubernetes, although it was still a beta feature at the time. I talked about it in detail in this blog post. In the 7.0U1 release, we added the same functionality to Tanzu Kubernetes Grid clusters. It is still an offline operation, meaning that the Pod will need to be unscheduled, and then rescheduled after the size of the PV has been increased. More details on how to do an offline volume grow operation can be found here.
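For context, a volume can only be grown if its StorageClass permits expansion. A minimal sketch, assuming a hypothetical class name, looks like this:

```yaml
# Hypothetical StorageClass; the name and parameters are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true   # required for PVCs of this class to be resized
```

With the Pod unscheduled, the offline grow itself is just an edit to the PVC, raising spec.resources.requests.storage to the new desired size (for example via kubectl edit or kubectl patch) and waiting for the resize to complete before rescheduling the Pod.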

CNS-CSI 2.1 Improvements in vanilla Kubernetes

There is one major improvement that we are making to the vanilla Kubernetes CSI driver.

1. CSI Migration (beta)

While stateful applications could already be migrated offline between VCP and CSI using Velero, there is a requirement for this to be done seamlessly with a live migration feature. CSI Migration will be available in the Kubernetes 1.19 release as beta functionality. To be able to use migration of volumes from the earlier, in-tree VCP to CSI on vSphere, customers will also need vSphere 7.0U1 and CSI driver version 2.1. Persistent Volume (PV) operations run on a Kubernetes cluster with the VCP will be seamlessly redirected to the out-of-tree CSI driver.

Migration between the older, in-tree VCP and the newer vSphere CSI Driver is complicated by the fact that the VCP uses the path to a VMDK for the Persistent Volume lifecycle operations, whereas the CSI Driver uses the First Class Disk ID. The APIs needed for migration workflows (relating VMDKs to FCDs, and vice-versa) were added to vSphere 7.0U1.
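As a sketch of how the beta functionality is switched on, CSI migration in upstream Kubernetes is controlled by feature gates on the kube-controller-manager and kubelets. The gate names below are taken from the upstream Kubernetes feature-gate list; verify them against the release notes of your exact Kubernetes 1.19 build before use.

```
# Illustrative feature-gate flags for kube-controller-manager and kubelet;
# confirm the exact gate names for your release.
--feature-gates=CSIMigration=true,CSIMigrationvSphere=true
```

With these gates enabled, in-tree VCP volume operations are routed to the out-of-tree vSphere CSI driver rather than being handled by the in-tree plugin.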

More detailed information will follow, but this should certainly assist customers who are on distributions or have applications that currently use the older, in-tree, vSphere Cloud Provider (VCP), as at some point the in-tree VCP will be deprecated and removed from the Kubernetes distributions.

2 Replies to “CNS-CSI 2.1 with vSphere 7.0U1 – What’s new?”

  1. Hi!
    I’m currently utilizing a TKG cluster within vSphere with Kubernetes. Is it possible to create read-write-many PVCs with vSphere 7U1, natively?

    Our current solution is providing our own storage class.

    1. Hi Kevin, I guess you are asking if we have vSAN File Services support in vSphere with Tanzu for the TKG guest clusters in 7.0U1? At this time, the answer is no; we cannot dynamically provision RWX volumes backed by vSAN File Services. This is on our to-do list.

      If you require read-write-many file shares, one option is to use the built-in Kubernetes NFS driver. However, I do not believe this has dynamic provisioning capabilities, and the share needs to be created in advance before it can be mounted to Kubernetes Pods.
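      As a rough sketch of that workaround, a pre-created NFS share can be exposed to Pods via a static PV and PVC. The server address, export path, and object names below are placeholders for illustration, assuming an NFS export that already exists.

      ```yaml
      # Hypothetical example: exposing a pre-created NFS export as an RWX
      # volume via the in-tree NFS plugin. Server and path are placeholders.
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: nfs-share-pv
      spec:
        capacity:
          storage: 5Gi
        accessModes:
          - ReadWriteMany
        nfs:
          server: 192.168.1.100     # placeholder NFS server address
          path: /exports/share1     # placeholder export path
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: nfs-share-pvc
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: ""        # bind statically, not via a provisioner
        resources:
          requests:
            storage: 5Gi
        volumeName: nfs-share-pv
      ```

      Multiple Pods can then mount nfs-share-pvc simultaneously in read-write mode.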
