I’m writing this post because of a misconception I had regarding how read-only volumes are configured in Kubernetes. I thought this was controlled by the accessModes parameter in the PersistentVolumeClaim manifest file. This is not the case. It is controlled from the Pod specification, which to me seems a bit strange. Why would this not be controlled from the PVC manifest? One of our engineers pointed me to a few Kubernetes discussions on the behaviour of accessModes and readOnly here and here. It would seem that I am not the only one confused by this behaviour. In this post, I deploy…
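To illustrate the distinction, here is a minimal sketch of where the readOnly setting actually lives; the Pod and PVC names are placeholders, and I’m assuming a PVC called demo-pvc already exists:

```yaml
# Read-only is enforced in the Pod spec, not via the PVC's accessModes.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                # hypothetical Pod name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true            # read-only at the container mount
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc       # assumed pre-existing PVC
      readOnly: true            # read-only at the volume reference
```

Note that a PVC accessMode such as ReadOnlyMany describes how the volume may be attached across nodes; it does not, by itself, make the mount read-only inside the Pod.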
Another new feature added to the vSphere CSI driver in the vSphere 7.0 release is the ability to offline extend / grow a Kubernetes Persistent Volume (PV). This requires a special directive to be added to the StorageClass and, as per the title, the operation must be done offline, whilst the PV is detached from any Pod. Let’s take a closer look at the steps involved, starting with a new CSI component, the CSI resizer. To enable resizing operations, a new component called csi-resizer has been added to the vSphere CSI Controller. We can examine the csi-resizer and other components associated with the…
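The special directive in question is the allowVolumeExpansion field. A minimal sketch of a StorageClass that permits resizing might look like this (the class name is hypothetical):

```yaml
# StorageClass allowing PVs provisioned from it to be grown later.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-expandable      # hypothetical name
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true      # the directive that enables resizing
```

With this in place, growing a volume is simply a matter of editing spec.resources.requests.storage in the PVC (whilst it is detached from any Pod, per the offline requirement) and letting the csi-resizer reconcile the change.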
A common request we’ve had for the vSphere CSI (Container Storage Interface) driver is to support encryption of Kubernetes Persistent Volumes using the vSphere feature called VMcrypt. Although we’ve had VM encryption since vSphere 6.5, this was a feature that we could not support in the first version of the CSI driver that we shipped with vSphere 6.7U3. However, I’m pleased to announce that we can now support this feature with the new CSI driver shipping with vSphere 7.0. The reason we can support it in vSphere 7.0 is that First Class Disks, also known as Improved Virtual Disks, now…
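As a taste of what’s to come, encryption is driven through storage policies. A minimal sketch of a StorageClass that requests encrypted volumes, assuming an encryption-enabled storage policy has already been created in vCenter (the policy and class names here are placeholders):

```yaml
# StorageClass mapping PVs to an encryption-capable vSphere storage policy.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-sc                        # hypothetical name
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "VM Encryption Policy" # assumed policy name in vCenter
```

Any PVC that references this class should then be provisioned as an encrypted First Class Disk.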
With the release of vSphere 7.0, we also announced enhancements to our Cloud Native Storage (CNS) offering. One of the new features that we now offer in vSphere 7.0 is the ability to provision Virtual Volumes (vVols) to back Kubernetes Persistent Volumes (PVs) via our updated version of the vSphere Container Storage Interface (CSI) driver. In this post, I will walk through the steps involved in consuming vVols via Kubernetes manifest files when dynamically provisioning PVs. I will also show some enhancements to our CNS UI in vSphere 7.0 so that you can easily identify vVol-backed PVs. Step 1…
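By way of a preview, once a suitable StorageClass exists, dynamic provisioning of a vVol-backed PV is driven entirely from the PVC side. A minimal sketch, assuming a StorageClass called vvols-sc that maps to a vVols-capable storage policy:

```yaml
# PVC requesting a dynamically provisioned, vVol-backed volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vvols-pvc               # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi              # example size
  storageClassName: vvols-sc    # assumed vVols-backed StorageClass
```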
Now that we have our vSphere with Kubernetes deployed, we take the next logical step in this post and deploy a Tanzu Kubernetes Grid (TKG) guest cluster. [Update] Whilst guest cluster isn’t an official name for the Tanzu Kubernetes cluster, I’ll use it in this post to differentiate it from the Supervisor cluster deployed with vSphere with Kubernetes. TKG is a fully CNCF-certified Kubernetes distribution. It is deployed as a set of virtual machines, in accordance with a TanzuKubernetesCluster manifest which we will look at later (a skeleton sketch follows below). The OS and K8s distribution are also specified in the manifest. There may…
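To give a feel for the shape of that manifest, here is a minimal sketch based on the v1alpha1 API of this era; the cluster name, namespace, VM class, storage class, counts and version are all placeholders:

```yaml
# Skeleton TanzuKubernetesCluster manifest (values are illustrative).
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-guest-01            # hypothetical cluster name
  namespace: demo-ns            # hypothetical Supervisor namespace
spec:
  distribution:
    version: v1.16              # requested K8s distribution
  topology:
    controlPlane:
      count: 1                  # control plane VMs
      class: best-effort-small  # assumed VM class
      storageClass: vsan-default-storage-policy  # assumed storage class
    workers:
      count: 3                  # worker VMs
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```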
In my previous post on VCF 4.0, we looked at the steps involved in deploying vSphere with Kubernetes in a Workload Domain (WLD). When we completed that step, we had rolled out the Supervisor Control Plane VMs, and installed the Spherelet components which allow our ESXi hosts to behave as Kubernetes worker nodes. Let’s now take a closer look at that configuration, and I will show you a few simple Kubernetes operations to get you started on the Supervisor Cluster in vSphere with Kubernetes. Disclaimer: “Like my earlier posts, I want to be clear, this post is based on a…
I recently wanted to deploy a newer version of Kubernetes to see it working with our Cloud Native Storage (CNS) feature. Having assisted with the original landing pages for CPI and CSI, I’d done this a few times in the past. However, the deployment tutorial that we used back then was based on Kubernetes version 1.14.2. I wanted to go with a more recent build of K8s, e.g. 1.16.3. By the way, if you are unclear about the purpose of the CPI and CSI, you can learn more about them on the landing pages, here for CPI and here for…