CSI Topology – Configuration How-To

In this post, we will look at another feature of the vSphere CSI driver: the ability to place Kubernetes objects in different vSphere environments using a combination of vSphere Tags and the driver's topology, or failure domains, feature. To achieve this, some additional entries must be added to the vSphere CSI driver configuration file. The CSI driver discovers the topology of each Kubernetes node/virtual machine and, through the kubelet, adds it as labels to the nodes. Please note that at the time of writing, the volume topology and availability zone feature was still in beta with vSphere…
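To give a concrete sense of what those extra configuration entries look like, here is a minimal sketch of the [Labels] section that the topology feature reads from the csi-vsphere.conf configuration file. The tag category names k8s-region and k8s-zone are assumptions for this example; they must match tag categories you have created in vSphere and attached to your datacenter and cluster objects.

```
[Labels]
# Assumed tag category names - substitute the categories
# created in your own vSphere environment.
region = k8s-region
zone = k8s-zone
```

Once the CSI driver pods restart with this configuration, each Kubernetes node should be labelled with its discovered region and zone, and topology-aware StorageClasses can then steer volume placement using those labels.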

vSphere CSI v2.2 – Online Volume Expansion

The vSphere CSI driver version 2.2 has just been released. One of the features I was looking forward to in this release is the inclusion of Online Volume Expansion. While volume expansion was available in earlier releases, it was always an offline operation. In other words, you had to detach the volume from the pod, grow it, and then attach it back once the expand operation completed. In this version, there is no need to remove the Pod. In this short post, I’ll show a quick demonstration of how it is done. Requirements: note that this feature requires vSphere 7.0 Update 2 (U2)…
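As a rough sketch of what this looks like in practice (the StorageClass and PVC names here are placeholders, not taken from the post), online expansion needs a StorageClass that permits volume expansion; the PVC can then be patched to a larger size while the Pod remains attached:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc          # hypothetical name
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true     # allows PVCs from this class to be grown
```

With the Pod still running, a request such as kubectl patch pvc demo-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}' should then trigger the online expand operation, with no detach required.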

VCP to vSphere CSI Migration in Kubernetes

When VMware first introduced support for Kubernetes, our first storage driver was the VCP, the in-tree vSphere Cloud Provider. Some might remember that this driver was referred to as Project Hatchway back in the day. This in-tree driver allows Kubernetes to consume vSphere storage for persistent volumes. One of the drawbacks of the in-tree driver approach was that every storage vendor had to include their own driver in each Kubernetes distribution, which ballooned the core Kubernetes code and made maintenance difficult. Another drawback was that vendors typically had to wait for a new version of Kubernetes to release…

Task “Delete a virtual storage object” reports “A specified parameter was not correct”

I’ve recently been looking at the vSphere Velero Plugin, and how the latest version of the plugin enables administrators to back up and restore vSphere with Tanzu Supervisor cluster objects as well as Tanzu Kubernetes “guest” cluster objects. This plugin utilizes vSphere snapshot technology, so that a Kubernetes Persistent Volume (PV) backed by a First Class Disk (FCD) in vSphere can be snapshotted; the snapshot is then moved by a Data Manager appliance to an S3 object store bucket. Once the data movement operation has completed, the snapshot is removed from the PV/FCD. During the testing of this new functionality,…
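For context, a backup that exercises this snapshot-and-move workflow is typically started from the Velero CLI. The backup name and namespace below are placeholders for illustration:

```shell
# Back up a namespace, snapshotting its persistent volumes.
# With the vSphere plugin installed, each PV/FCD snapshot is
# then offloaded by the Data Manager to the S3 bucket.
velero backup create demo-backup \
  --include-namespaces demo \
  --snapshot-volumes
```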

CNS-CSI 2.1 with vSphere 7.0U1 – What’s new?

In this post, we will look at what is in the new release of the vSphere CSI driver for Kubernetes, as well as enhancements to Cloud Native Storage (CNS), which handles CSI requests on the vSphere infrastructure. The CSI improvements will be available in version 2.1 of the driver, and the CNS components will be part of vSphere 7.0U1. Both are required for the features discussed here. The main objective of this release is two-fold: (a) to add CNS-CSI features to vSphere with Kubernetes so that it has a similar specification to the CNS-CSI features that are available with vanilla Kubernetes,…

Failed to deploy PV to local volume – “No compatible datastore found for storagePolicy”

This is something that I “spun my wheels” on a little bit last week, so I decided I’d write a short article to explain the issue in a bit more detail. It relates to the provisioning of a Persistent Volume on the Supervisor cluster of a vSphere with Kubernetes deployment. I had a local VMFS volume on one of my hosts, so I went ahead and tagged the volume using vSphere Tagging. I then built a tag-based storage policy so that when that policy was selected for provisioning, the objects provisioned would be placed on that local,…
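To illustrate the intended setup (the class and policy names are made up for this example), a tag-based storage policy is normally consumed from Kubernetes via the storagepolicyname parameter of a vSphere CSI StorageClass:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-vmfs             # hypothetical name
provisioner: csi.vsphere.vmware.com
parameters:
  # Must match the tag-based storage policy created in vSphere
  storagepolicyname: "local-vmfs-policy"
```

On a Supervisor cluster, the equivalent StorageClass is created automatically when the storage policy is assigned to a namespace, which is the path this post follows.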

Helm Chart for vSphere CSI driver

After recently presenting on the topic of the vSphere CSI driver, I received feedback from a number of different people that the current install mechanism is a little long-winded and prone to error. The request was for a Helm Chart to make things a little easier. I spoke to a few people about this internally, and while we have some long-term plans to make this process easier, we didn’t have any plans in the short term. At that point, I reached out to my colleague and good pal, Myles Gray, and we decided we would try to create our…
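For readers unfamiliar with Helm, the appeal is that an install collapses into a couple of commands. The repository URL and chart name below are purely hypothetical placeholders, shown only to illustrate the workflow a chart enables:

```shell
# Hypothetical repository URL and chart name - shown only to
# illustrate the two-step Helm install workflow.
helm repo add vsphere-csi https://example.com/charts
helm install vsphere-csi vsphere-csi/vsphere-csi-driver --namespace kube-system
```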