Last week I looked at how quotas are implicit on Kubernetes RWX Persistent Volumes that are instantiated on vSAN File Service file shares. This got me thinking about another aspect of Kubernetes Persistent Volumes – how could some of the other parameters associated with file shares be controlled? In particular, I wanted to control which networks could access a volume, what access permissions were allowed from those networks, and whether root privileges could be squashed when a root user accesses a volume. All of these options are configurable from the vSphere client and are very visible when creating file shares…
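For context on where those controls live on the Kubernetes side, the vSphere CSI driver accepts network permission settings for file volumes in its csi-vsphere.conf configuration file. Here is a minimal sketch; the section name and IP range are illustrative values, not taken from the post:

```
[NetPermissions "A"]
# Networks allowed to mount file-share-backed volumes
ips = "10.20.20.0/24"
# Access level granted to clients on those networks:
# READ_WRITE, READ_ONLY or NO_ACCESS
permissions = "READ_WRITE"
# Whether to squash root privileges for root users on clients
rootsquash = false
```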
Earlier this week, I participated in a customer call around vSAN File Service and Kubernetes Persistent Volumes. I have highlighted the dynamic Read-Write-Many Persistent Volume feature of our vSphere CSI driver in conjunction with vSAN File Service before. Read-Write-Many (RWX) volumes are volumes that can be accessed/shared by multiple containers. During the discussion, one question came up in relation to quotas: can they be applied to Persistent Volumes that are backed by file shares from vSAN File Service? Answering that question is the purpose of this post. Now, for those of you who are familiar with vSAN File Service, you…
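As a quick illustration of the Kubernetes side of this, a ReadWriteMany claim like the sketch below is what drives the creation of the backing file share. The claim and StorageClass names are placeholders, and the working assumption here is that the requested capacity is what surfaces as the quota on the resulting share:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-pvc                    # placeholder name
spec:
  accessModes:
    - ReadWriteMany                # RWX - shareable across multiple Pods
  resources:
    requests:
      storage: 2Gi                 # requested capacity, reflected on the file share
  storageClassName: vsan-file-sc   # assumed StorageClass backed by vSAN File Service
```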
A while back, I was looking at ways that I could query vSphere resources and inventory using the Go language. My end goal was to develop a prototype Kubernetes Operator to extend Kubernetes so that a developer or K8s cluster admin could query vSphere resources and inventory from the kubectl interface. While the Go language itself has lots of code examples online, and is relatively intuitive to a novice programmer like myself, I struggled quite a bit with getting to grips with govmomi, the Go library for interacting with the VMware vSphere API. In particular, I had difficulty in trying…
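To give a feel for the sort of govmomi code involved, here is a minimal sketch that connects to vCenter and lists virtual machine names. It assumes a GOVMOMI_URL environment variable of the form https://user:pass@vcenter/sdk, and it skips certificate verification, so treat it as illustrative rather than production-ready:

```go
package main

import (
	"context"
	"fmt"
	"net/url"
	"os"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/find"
)

func main() {
	ctx := context.Background()

	// Build the vCenter SDK URL from an environment variable,
	// e.g. https://user:pass@vcenter.example.com/sdk
	u, err := url.Parse(os.Getenv("GOVMOMI_URL"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Connect to vCenter (true = skip certificate verification)
	c, err := govmomi.NewClient(ctx, u, true)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer c.Logout(ctx)

	// Use a finder to walk the vSphere inventory
	finder := find.NewFinder(c.Client, true)
	dc, err := finder.DefaultDatacenter(ctx)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	finder.SetDatacenter(dc)

	// List all virtual machines in the datacenter
	vms, err := finder.VirtualMachineList(ctx, "*")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, vm := range vms {
		fmt.Println(vm.Name())
	}
}
```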
In this post, we will take a look at a brand new service that is now available in vSphere with Tanzu, called the vSphere VM Service. This new service enables developers to create virtual machines on vSphere infrastructure via Kubernetes YAML manifests, just like they would create Tanzu Kubernetes clusters via the TKG service, or PodVMs via the Pod service, both of which are already available in vSphere with Tanzu. Since we feel that many applications will be made up of both containers and VMs, this is the first step in enabling developers to create these multi-faceted applications via the…
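To give a flavour of what such a manifest looks like, here is a minimal sketch of a VM Service VirtualMachine object; the image, class, storage class and network values are environment-specific placeholders:

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: demo-vm                          # placeholder name
  namespace: my-namespace                # a vSphere Namespace with VM Service enabled
spec:
  imageName: ubuntu-20.04-cloudimage     # a VM image published to the namespace
  className: best-effort-small           # a VM class assigned to the namespace
  storageClass: vsan-default-policy      # a storage class assigned to the namespace
  powerState: poweredOn
  networkInterfaces:
    - networkType: vsphere-distributed   # depends on the networking in use
```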
In this post, we will look at another feature of the vSphere CSI driver that enables the placement of Kubernetes objects on different vSphere environments using a combination of vSphere Tags and a feature of the CSI driver called topology or failure domains. To achieve this, some additional entries must be added to the vSphere CSI driver configuration file. The CSI driver discovers the topology of each Kubernetes node/virtual machine and, through the kubelet, adds it as labels on the nodes. Please note that at the time of writing, the volume topology and availability zone feature was still in beta with vSphere…
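For reference, the additional entries in question go into the csi-vsphere.conf configuration file. A minimal sketch is below, assuming vSphere tag categories named k8s-region and k8s-zone have already been created and attached to the appropriate datacenter and clusters:

```
[Labels]
# Tag categories used to derive node topology;
# the category names here are assumptions
region = k8s-region
zone = k8s-zone
```

With these entries in place, the driver surfaces the tag values as failure-domain labels on each node, which a StorageClass can then reference via allowedTopologies to steer volume placement.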
The vSphere CSI driver version 2.2 has just been released. One of the features I was looking forward to in this release is the inclusion of Online Volume Expansion. While volume expansion was available in earlier releases, it was always an offline operation. In other words, you had to detach the volume from the pod, grow it, and then attach it back once the expand operation completed. In this version, there is no need to remove the Pod. In this short post, I’ll show a quick demonstration of how it is done. Requirements Note: This feature requires vSphere 7.0 Update 2 (U2).…
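As useful context, online expansion relies on the StorageClass permitting expansion in the first place. A minimal sketch is below (the class name is a placeholder); growing the volume is then just a matter of increasing the PVC’s spec.resources.requests.storage while the Pod stays running:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-sc                    # placeholder name
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true         # required for volume expansion
```

For example, `kubectl patch pvc block-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'` would trigger the resize (the PVC name and new size here are illustrative).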
Last week, I wrote an article which described how to use a combination of vSAN Availability Rules with Tag Based Placement Rules in vSAN Storage Policies to select one specific vSAN datastore for object placement when many vSAN datastores might appear as compatible candidates. I created this short 3m30s video to show how to do this in practice.