vSphere 7.0, Cloud Native Storage, CSI and vVols support

With the release of vSphere 7.0, we also announced enhancements to our Cloud Native Storage (CNS) offering. One of the new features that we now offer in vSphere 7.0 is the ability to provision Virtual Volumes (vVols) to back Kubernetes Persistent Volumes (PVs) via our updated version of the vSphere Container Storage Interface (CSI) driver. In this post, I will walk through the steps involved in consuming vVols via Kubernetes manifest files when dynamically provisioning PVs. I will also show some enhancements to our CNS UI in vSphere 7.0 so that you can easily identify vVol backed PVs.

Step 1 – Build a simple vVols policy

The first thing we need to do is build a storage policy in vSphere that can be referenced by the Kubernetes StorageClass manifest. Note that the CSI driver in vSphere 7.0 only supports a simple vVol policy. In my storage policy below, I included just a single placement rule, requesting that any storage provisioned with this policy be placed on a Pure Storage array, which is vVol capable. Note that at this time there is no storage policy support for vVol snapshots, replication or other advanced features with the vSphere 7.0 CSI driver. You must use the simple policy configuration shown below.

It should be noted that I also have a vVol datastore from the Pure Storage FlashArray already presented to the ESXi hosts where my vanilla Kubernetes cluster is running.

Step 2 – Create a Storage Class and PVC manifest

Here are my simple manifests (YAML files) to consume vVols. Note that my vanilla Kubernetes deployment has been built with the latest version of the CSI driver, so this driver supports RWX persistent volumes from vSAN-FS, vVols, and other features such as encryption support and volume extend support, which I will write about soon. This driver is not yet publicly available, but it should be available for download imminently.

This is my StorageClass manifest. The main points of interest are the provisioner, which is our vSphere CSI driver, and the storagepolicyname parameter, which matches the storage policy built in vSphere. And of course, this policy “VVol-Simple” has the placement rule for my Pure Storage (vVol) FlashArray.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vvol-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "VVol-Simple"
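
Before moving on, it is worth confirming that the StorageClass exists and is pointing at the vSphere CSI provisioner. A minimal check with kubectl, assuming the manifest above was saved as vvol-sc.yaml (the filename is simply my choice), looks like this:

# Create the StorageClass from the manifest above (saved locally as vvol-sc.yaml)
kubectl apply -f vvol-sc.yaml

# Confirm it exists and that the provisioner is csi.vsphere.vmware.com
kubectl get sc vvol-sc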

This is my very simple PVC manifest. It references the StorageClass (vvol-sc) for dynamically creating Persistent Volumes. This PVC requests the creation of a 7GB volume using the storage policy in the StorageClass, which effectively means the PV will be created on the Pure Storage (vVol) FlashArray vVol datastore, as this is the only storage that can match the policy requirements.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vvol-pvc
spec:
  storageClassName: vvol-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 7G
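
Applying the claim and watching it bind is then straightforward. This is just a sketch, again assuming the manifest was saved locally (here as vvol-pvc.yaml); the exact output will vary with your Kubernetes version:

# Create the PVC
kubectl apply -f vvol-pvc.yaml

# Watch the claim move from Pending to Bound once the vSphere CSI driver
# has dynamically created the backing volume on the vVol datastore
kubectl get pvc vvol-pvc -w

# The dynamically provisioned PV should reflect the 7G request and the vvol-sc StorageClass
kubectl get pv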

Step 3 – Apply manifests and observe array

To begin, the vVol datastore has no contents:

Similarly, the Pure Storage (vVol) array has no vVols.

Once we apply these manifests, the following events take place on the vVol datastore:

  1. A new fcd directory is created on the vVol datastore. This directory contains all of the First Class Disks created on the datastore. Kubernetes Persistent Volumes are backed by First Class Disks (also known as Improved Virtual Disks or IVDs) on vSphere storage. These FCDs look like normal VMDKs, but the difference with FCDs is that they can be worked on directly, outside the context of a VM. You can learn more about FCDs here. The FCD directory is where the FCDs and their respective sidecars live. The sidecars contain some FCD metadata.
  2. A new catalog directory is created on the vVol datastore. This catalog contains the metadata that tracks the FCDs on a datastore. You will see one catalog created per datastore, when FCDs are provisioned on the datastore.

Remember, when we talk about FCDs, we’re really just talking about a special virtual machine disk (VMDK) created on vSphere storage to back a Kubernetes Persistent Volume.

We can now see both of these folders on the vVol datastore.

And if we drill in to the fcd directory, we can see the Persistent Volume represented as a First Class Disk (.vmdk), alongside its .vmfd “sidecar” metadata file.
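
If you prefer the command line to the datastore browser, govc can be used to look at the same contents. Treat this as a sketch: it assumes govc is already configured to talk to the vCenter Server and that the vVol datastore is named VVolDatastore in the inventory.

# List the top level of the vVol datastore - the catalog and fcd directories should be visible
govc datastore.ls -ds=VVolDatastore

# List the contents of the fcd directory - the FCD (.vmdk) and its .vmfd sidecar
govc datastore.ls -ds=VVolDatastore fcd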

We can now go ahead and create a new Persistent Volume via another PVC manifest file, using the same StorageClass, and observe what happens. Here is the second manifest, referencing the same StorageClass as before, but with a different volume size request.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vvol-pvc-2
spec:
  storageClassName: vvol-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
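
Applying this second manifest is identical to before. Once it binds, both claims and both dynamically provisioned PVs should be visible (the names and sizes simply mirror the two manifests):

kubectl apply -f vvol-pvc-2.yaml

# Both claims should now be Bound, each backed by its own PV
kubectl get pvc
kubectl get pv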

And after applying that manifest, we see the second FCD/VMDK and the .vmfd “side-car” appear in the fcd directory on the datastore:

If we now take a look on the Pure Storage FlashArray, we see new Virtual Volumes created, as well as some Volume Groups. There is a Volume Group created for the catalog directory, and another for the fcd directory. If we drill into the Volume Group for the fcd directory, there are currently 5 volumes. There is 1 x Config vVol representing the fcd directory. Then we have 4 additional vVols, 2 x Data and 2 x Other. The Data vVols are the FCDs/VMDKs and the Other vVols are the .vmfd “sidecar” files that we saw on the datastore earlier. The sizes of the Data vVols match the requests we made in the PVC manifests.
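
The FCDs can also be listed from the vSphere side using govc's First Class Disk commands, rather than browsing the datastore or the array. Again, this is only a sketch; I am assuming a recent govc build with the disk commands available and a datastore named VVolDatastore:

# List the First Class Disks backing the Kubernetes Persistent Volumes on this datastore
govc disk.ls -ds=VVolDatastore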

Step 4 – Observe Cloud Native Storage (CNS) in vSphere UI

One of the main advantages of running K8s on top of vSphere is the insight we receive via CNS. Rather than having to keep switching between array views and datastore content views, we put all the information relevant to Persistent Volumes consuming vSphere storage in one place. Using vVols for PVs is no different. Here is the Container Volumes view, showing our two simple PVs in the vSphere UI:

We can see the PV names, that they are of type Block (as opposed to File), that they reside on the VVolDatastore, and that they are using the storage policy VVol-Simple, which is Compliant.

If I click on the Details icon, we see more information about the Kubernetes objects, as shown here, including the name of the Persistent Volume Claim as it appeared in our manifest.

If we click on the Basics view, there is more information, such as Health Status. Some nice insights directly from the vSphere UI.
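
A handy way to tie the Container Volumes view back to Kubernetes is the volume handle on the PV, which for the vSphere CSI driver should correspond to the ID of the backing FCD. A quick way to pull it (the PV name here is illustrative):

# Show the CSI volume handle (the backing FCD ID) for a given PV
kubectl get pv <pv-name> -o jsonpath='{.spec.csi.volumeHandle}'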

For those of you interested in working with both Kubernetes and vVols, vSphere 7.0 now enables you to do just that. Just ensure that you keep the storage policies very simple in this release.

2 Replies to “vSphere 7.0, Cloud Native Storage, CSI and vVols support”

  1. Hi Cormac,

    I have seen a limitation of First Class Disks (FCDs). An FCD cannot be snapshotted if it has the multi-writer flag enabled, as in the case of Oracle RAC. This prevents snapshotting of a database using shared disks. Is there a plan to support snapshots of shared FCDs?

    Thanks,

    1. Hi Harjit, after speaking to engineering, we would like you to open a service request with GSS to investigate this further. Can you please share the SR here once that task is done? You will need to gather logs and so on. I can then request that this is escalated to our engineering team so that they can take a close look.
