VMworld 2021 – My Top 10 Picks around Kubernetes

Well, here we are again – another VMworld has come around. As most of you will know, VMworld 2021 is going to be another “fully virtual” event (no pun intended), just as VMworld 2020 was. Hard to imagine that it is 3 years since I presented at VMworld 2018 in Las Vegas, and 2 years since I presented at VMworld EMEA 2019 in Barcelona. Strange days indeed. Let’s hope we can all get together at VMworld 2022 and have a blast. As in previous years, I have picked out a few presentations that I plan on attending at…

vSAN File Service backed Persistent Volumes Network Access Controls [Video]

A short video to demonstrate how network access to Kubernetes Persistent Volumes backed by vSAN File Service file shares can be controlled. This allows an administrator to determine who has read-write access and who has read-only access to a volume, based on the network from which the volume is accessed. This involves modifying the configuration file of the vSphere CSI driver, as shown in the following demonstration. The root squash parameter can also be controlled using this method. This links to a more detailed step-by-step write-up on how to edit the CSI driver configuration file and control…
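
For context, these network permissions live in the vSphere CSI driver configuration file (commonly csi-vsphere.conf). Here is a minimal sketch; the section names and networks below are illustrative assumptions, not values taken from the video.

```
# Sketch of NetPermissions entries in csi-vsphere.conf.
# Section names and CIDR ranges are hypothetical examples.
[NetPermissions "A"]
ips = "10.20.20.0/24"        # network that is allowed to use the share
permissions = "READ_WRITE"   # READ_WRITE, READ_ONLY or NO_ACCESS
rootsquash = false           # root keeps its privileges from this network

[NetPermissions "B"]
ips = "10.30.30.0/24"
permissions = "READ_ONLY"    # this network only gets read access
rootsquash = true            # root is squashed to an anonymous user
```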

vSAN File Service backed RWX Persistent Volume Quota [Video]

A short video to demonstrate how vSAN File Service file shares, which are used to back dynamically created Kubernetes read-write-many persistent volumes (PVs), have an implicit hard quota associated with them. Read-Write-Many (RWX) PVs are volumes which can be shared between multiple Kubernetes Pods. For more details about this feature, please check out this earlier blog post.
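
The hard quota comes from the capacity requested in the PersistentVolumeClaim. As a rough sketch (the claim and StorageClass names here are hypothetical), a request like the following would produce a file share hard-limited to 1Gi:

```
# Hypothetical RWX claim; the requested storage becomes the hard
# quota on the backing vSAN File Service file share.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-pvc
spec:
  accessModes:
    - ReadWriteMany              # file share semantics, shareable by Pods
  resources:
    requests:
      storage: 1Gi               # implicit hard quota on the file share
  storageClassName: vsan-file-sc # assumed file-volume StorageClass
```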

Adding Network Permissions to Kubernetes PVs backed by vSAN File Share

Last week I looked at how quotas were implicit on Kubernetes RWX Persistent Volumes which were instantiated on vSAN File Service file shares. This got me thinking about another aspect of Kubernetes Persistent Volumes: how could some of the other parameters associated with file shares be controlled? In particular, I wanted to control which networks could access a volume, what access permissions were allowed from that network, and whether root privileges could be squashed when a root user accesses a volume. All of these options are configurable from the vSphere client and are very visible when creating file shares…
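
Once permissions like these are in place, a quick sanity check is to attempt a write from a Pod on each network. This sketch assumes a Pod named app-pod with the volume mounted at /data:

```
# From a Pod on the read-write network, a write should succeed.
kubectl exec app-pod -- touch /data/rw-test

# From a Pod on the read-only network, the same write should fail
# with a "Read-only file system" error.
kubectl exec app-pod -- touch /data/ro-test
```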

CSI Topology – Configuration How-To

In this post, we will look at another feature of the vSphere CSI driver that enables the placement of Kubernetes objects on different vSphere environments using a combination of vSphere Tags and a feature of the CSI driver called topology, or failure domains. To achieve this, some additional entries must be added to the vSphere CSI driver configuration file. The CSI driver discovers the topology of each Kubernetes node/virtual machine and, through the kubelet, adds it as labels on the nodes. Please note that at the time of writing, the volume topology and availability zone feature was still in beta with vSphere…
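
As a rough sketch, the extra entries map vSphere tag categories to topology labels. The category names below (k8s-region, k8s-zone) are assumptions for illustration:

```
# csi-vsphere.conf: tell the CSI driver which vSphere tag categories
# describe region and zone (the category names are hypothetical).
[Labels]
region = k8s-region
zone = k8s-zone
```

A StorageClass can then restrict provisioning to a failure domain. Since the feature was in beta at the time, the label keys below are an assumption and may differ between driver versions:

```
# Assumed StorageClass pinning volumes to one zone via the beta
# failure-domain label keys.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zone-a-sc
provisioner: csi.vsphere.vmware.com
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - zone-a
```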

vSphere CSI v2.2 – Online Volume Expansion

The vSphere CSI driver version 2.2 has just been released. One of the features I was looking forward to in this release is the inclusion of Online Volume Expansion. While volume expansion was available in earlier releases, it was always an offline operation. In other words, you had to detach the volume from the pod, grow it, and then attach it back once the expand operation completed. In this version, there is no need to remove the Pod. In this short post, I’ll show a quick demonstration of how it is done. Requirements: note that this feature requires vSphere 7.0 Update 2 (U2).…
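
From the Kubernetes side, an online expansion is simply a patch to the claim while the Pod keeps using the volume. The StorageClass and PVC names in this sketch are hypothetical:

```
# The StorageClass must permit expansion.
kubectl patch sc vsan-sc -p '{"allowVolumeExpansion": true}'

# Grow the claim in place; no need to stop or detach the Pod.
kubectl patch pvc block-pvc \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

# Watch the capacity update once the volume and filesystem are resized.
kubectl get pvc block-pvc -w
```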

VCP to vSphere CSI Migration in Kubernetes

When VMware first introduced support for Kubernetes, our first storage driver was the VCP, the in-tree vSphere Cloud Provider. Some might remember that this driver was referred to as Project Hatchway back in the day. This in-tree driver allows Kubernetes to consume vSphere storage for persistent volumes. One of the drawbacks to the in-tree driver approach was that every storage vendor had to include their own driver in each Kubernetes distribution, which ballooned the core Kubernetes code and made maintenance difficult. Another drawback of this approach was that vendors typically had to wait for a new version of Kubernetes to release…
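
For reference, upstream Kubernetes handles this kind of transition with CSI migration feature gates, which redirect in-tree volume operations to the CSI driver. A sketch of enabling them on the kubelet and kube-controller-manager (treat the gate names as an assumption for the driver versions discussed here):

```
# Redirect in-tree vSphere volume operations to the CSI driver.
--feature-gates=CSIMigration=true,CSIMigrationvSphere=true
```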