vSphere CSI driver versions and capabilities

The vSphere Container Storage Interface (CSI) driver enables Kubernetes clusters running on vSphere to provision persistent volumes on vSphere storage. This applies both to native Kubernetes clusters and to vSphere with Kubernetes. With the release of vSphere 7.0 and vSphere with Kubernetes (formerly Project Pacific), there are now a number of different flavors of the vSphere CSI driver available. [Update] Before going any further, it is worth highlighting the differences between what we term native Kubernetes and vSphere with Kubernetes. Native Kubernetes has many flavors, such as VMware Tanzu Kubernetes Grid, VMware Tanzu Kubernetes Grid Integrated (TKGI) formerly known…

vSphere 7.0, Cloud Native Storage, CSI and offline volume extend

Another new feature added to the vSphere CSI driver in the vSphere 7.0 release is the ability to extend (grow) a Kubernetes Persistent Volume (PV) offline. This requires a special directive to be added to the StorageClass and, as the title suggests, the operation must be done offline while the PV is detached from any Pod. Let's take a closer look at the steps involved. New CSI component – CSI Resizer: to enable resizing operations, a new component called csi-resizer has been added to the vSphere CSI Controller. We can examine the csi-resizer and other components associated with the…
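As a minimal sketch of the StorageClass directive in question, assuming the standard vSphere CSI provisioner name and a placeholder storage policy, it might look like this:

```yaml
# Sketch of a StorageClass that permits volume expansion.
# allowVolumeExpansion is the "special directive" referred to above;
# the class name and storage policy name are placeholders.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-expandable-sc            # hypothetical name
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
parameters:
  storagepolicyname: "vSAN Default Storage Policy"   # assumed policy name
```

With such a class in place, growing an existing volume is generally a matter of editing the PVC's spec.resources.requests.storage to the new size while the PVC is not attached to any Pod.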

vSphere 7.0, Cloud Native Storage, CSI and encryption support

A common request we’ve had for the vSphere CSI (Container Storage Interface) driver is to support encryption of Kubernetes Persistent Volumes using the vSphere feature called VMcrypt. Although we’ve had VM encryption since vSphere 6.5, this was a feature that we could not support in the first version of the CSI driver that we shipped with vSphere 6.7U3. However, I’m pleased to announce that we can now support this feature with the new CSI driver shipping with vSphere 7.0. The reason we can support it in vSphere 7.0 is that First Class Disks, also known as Improved Virtual Disks, now…
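Since the CSI driver maps Kubernetes StorageClasses onto vSphere storage policies (SPBM), a hedged sketch of an encryption-capable StorageClass might look like the following; the policy name shown is an assumption and should be replaced with whatever VM Encryption policy exists in your vCenter:

```yaml
# Sketch of a StorageClass that maps PVs to a vSphere storage policy
# containing the VM Encryption (VMcrypt) capability.
# Both the class name and the policy name are placeholders.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-encrypted-sc             # hypothetical name
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "VM Encryption Policy"   # assumed SPBM policy name
```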

vSphere 7.0, Cloud Native Storage, CSI and vVols support

With the release of vSphere 7.0, we also announced enhancements to our Cloud Native Storage (CNS) offering. One of the new features that we now offer in vSphere 7.0 is the ability to provision Virtual Volumes (vVols) to back Kubernetes Persistent Volumes (PVs) via our updated version of the vSphere Container Storage Interface (CSI) driver. In this post, I will walk through the steps involved in consuming vVols via Kubernetes manifest files when dynamically provisioning PVs. I will also show some enhancements to our CNS UI in vSphere 7.0 so that you can easily identify vVol backed PVs. Step 1…
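For context, a rough sketch of the kind of manifests involved is shown below; the storage policy and object names are placeholders, with the policy assumed to target a vVol datastore:

```yaml
# Sketch of a StorageClass/PVC pair for dynamically provisioning a vVol-backed PV.
# "vVol No Requirements Policy" is an assumed SPBM policy name.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vvol-sc                          # hypothetical name
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vVol No Requirements Policy"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vvol-pvc                         # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: vvol-sc
```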

Deploying vSphere with Kubernetes via VCF 4.0 SDDC Manager (Video)

In this post, I am going to share another short video that I made which highlights the main steps involved when deploying vSphere with Kubernetes from VCF 4.0 SDDC Manager. You can find the complete steps here in this previous post which shows how to deploy vSphere with Kubernetes in a Workload Domain. The video will talk you through the validation steps that are done in SDDC Manager, and then show you the complete vSphere with Kubernetes deployment in the vSphere UI. We will see the configuration changes that are made to NSX-T during the process as well. At the…

Deploying flannel, vSphere CPI and vSphere CSI with later versions of Kubernetes

I recently wanted to deploy a newer version of Kubernetes to see it working with our Cloud Native Storage (CNS) feature. Having assisted with the original landing pages for CPI and CSI, I’d done this a few times in the past. However, the deployment tutorial that we used back then was based on Kubernetes version 1.14.2. I wanted to go with a more recent build of K8s, e.g. 1.16.3. By the way, if you are unclear about the purposes of the CPI and CSI, you can learn more about them on the landing page, here for CPI and here for…
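For context, the kubeadm portion of such a deployment is sketched below, assuming flannel's default pod CIDR and the out-of-tree (external) vSphere cloud provider; the apiVersion and Kubernetes version shown are illustrative only:

```yaml
# Sketch of a kubeadm configuration for a newer Kubernetes release, assuming
# flannel as the pod network and the external vSphere CPI/CSI drivers.
# The kubelet is told to expect an external cloud provider, and the pod
# subnet matches the CIDR used by the stock flannel manifest.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.3              # example version mentioned above
networking:
  podSubnet: "10.244.0.0/16"            # CIDR expected by the default flannel manifest
```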

vtopology – Insights into vSphere infrastructure from kubectl

As I got more and more familiar with running Kubernetes on top of vSphere, I came to the realization that it might be useful to be able to query the vSphere infrastructure from Kubernetes, particularly via kubectl. For example, I might like to know some of the details about the master nodes and worker nodes (e.g. which ESXi hosts are they on, and how many resources are they consuming?). Also, if I have a persistent volume, how can I query which vSphere datastore it is on, which policy it is using, and what the path to the VMDK is? Therefore I started…