Tanzu Kubernetes considerations with the new VM Class in vSphere with Tanzu

I recently posted about a new feature in vSphere with Tanzu called the VM Service, which became available with vSphere 7.0U2a. In a nutshell, this new service allows developers to provision not just Tanzu Kubernetes clusters and PodVMs in their respective namespaces, but native virtual machines as well. The VM Service exposes a new feature to developers called VirtualMachineClassBindings, and it has also introduced some new behaviour around an existing feature, the VirtualMachineClass. A VirtualMachineClass describes the available resource sizing for virtual machines: how much compute and memory to allocate to a VM, and also if the…
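To give a sense of what this looks like in practice, here is a minimal sketch of a VirtualMachineClass manifest. The class name and sizing values are illustrative, and the API group/version shown is my assumption of what the VM Service exposed at the time:

```yaml
# Minimal VirtualMachineClass sketch - name and sizings are illustrative.
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineClass
metadata:
  name: best-effort-small   # hypothetical class name
spec:
  hardware:
    cpus: 2      # vCPUs allocated to any VM created with this class
    memory: 4Gi  # memory allocated to any VM created with this class
```

A VirtualMachineClassBinding then makes a class such as this one available for use within a given namespace.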

TKG v1.3 and the NSX Advanced Load Balancer

In my most recent post, we took a look at how Cluster API is utilized in TKG. Note that this post refers to the Tanzu Kubernetes Grid (TKG) multi-cloud version, sometimes referred to as TKGm. I will use this naming convention for the multi-cloud TKG throughout this post, to differentiate it from the other TKG products in the Tanzu portfolio. In this post, we will take a closer look at a new feature in TKG v1.3, namely its support for the NSX ALB, the Advanced Load Balancer (formerly known as Avi Vantage), to…
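As a rough illustration of what this integration enables, here is a sketch of a plain Kubernetes Service of type LoadBalancer; assuming the cluster was deployed with the ALB integration enabled, the AKO (Avi Kubernetes Operator) component should notice this Service and program a virtual service on the NSX ALB for it. All names here are placeholders:

```yaml
# Sketch: a LoadBalancer Service fulfilled by the NSX ALB in a TKG v1.3
# cluster. Service and label names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer   # the ALB integration provides the external IP
  selector:
    app: web           # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080
```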

A closer look at Cluster API and TKG v1.3.1

In this post, I am going to take a look at Cluster API, and then at some of the changes made in TKG v1.3.1. TKG uses Cluster API extensively to create workload Kubernetes clusters, so we will be able to apply what we see in the first part of this post to TKG in the second part. There is already an extensive amount of information and documentation available on Cluster API, so I am not going to cover every aspect of it here. This link will take you to the Cluster API concepts, which discusses all the…
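To make the Cluster API relationship a little more concrete, here is a heavily trimmed sketch of the sort of Cluster object that TKG generates for a workload cluster. The names and CIDR ranges are illustrative, and the v1alpha3 API version is my assumption for the Cluster API release used by TKG v1.3.x:

```yaml
# Sketch of a Cluster API "Cluster" object of the kind TKG creates for
# each workload cluster. Names and CIDR ranges are illustrative.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: workload-cluster-1
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["100.96.0.0/11"]
    services:
      cidrBlocks: ["100.64.0.0/13"]
  infrastructureRef:   # delegates infrastructure specifics to the vSphere provider
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: VSphereCluster
    name: workload-cluster-1
```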

A first look at vSphere VM Service

In this post, we will take a look at a brand new service that is now available in vSphere with Tanzu, called the vSphere VM Service. This new service enables developers to create virtual machines on vSphere infrastructure via Kubernetes YAML manifests, just like they would create Tanzu Kubernetes clusters via the TKG service, or PodVMs via the Pod service, both of which are already available in vSphere with Tanzu. Since we feel that many applications will be made up of both containers and VMs, this is the first step in enabling developers to create these multi-faceted applications via the…
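As a sketch of what such a manifest might look like, here is a minimal VirtualMachine definition. The class, image, storage class and network names are all placeholders, and the API shape follows the vmoperator.vmware.com v1alpha1 API as I understand it:

```yaml
# Sketch: requesting a VM through the VM Service with a Kubernetes
# manifest. Class, image, storage class and network names are placeholders.
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: my-vm
  namespace: my-namespace
spec:
  className: best-effort-small   # must match a class bound to this namespace
  imageName: ubuntu-20.04        # a VirtualMachineImage from a content library
  storageClass: vsan-default     # hypothetical storage class
  powerState: poweredOn
  networkInterfaces:
    - networkType: vsphere-distributed   # assumes a vDS-backed setup
      networkName: workload-network      # hypothetical network name
```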

CSI Topology – Configuration How-To

In this post, we will look at another feature of the vSphere CSI driver that enables the placement of Kubernetes objects across different vSphere environments using a combination of vSphere tags and a feature of the CSI driver called topology, or failure domains. To achieve this, some additional entries must be added to the vSphere CSI driver configuration file. The CSI driver discovers the topology of each Kubernetes node/virtual machine and, through the kubelet, adds it as labels to the nodes. Please note that at the time of writing, the volume topology and availability zone feature was still in beta with vSphere…
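For reference, here is a minimal sketch of the kind of entries involved, assuming the vSphere tag categories have been named k8s-region and k8s-zone (both placeholder names) in the csi-vsphere.conf configuration file:

```
# Sketch of the topology additions to csi-vsphere.conf.
# "k8s-region" and "k8s-zone" are assumed vSphere tag category names.
[Labels]
region = k8s-region
zone = k8s-zone
```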

vSphere CSI v2.2 – Online Volume Expansion

Version 2.2 of the vSphere CSI driver has just been released. One of the features I was looking forward to in this release is the inclusion of Online Volume Expansion. While volume expansion was available in earlier releases, it was always an offline operation. In other words, you had to detach the volume from the pod, grow it, and then attach it back once the expand operation completed. In this version, there is no need to remove the Pod. In this short post, I’ll show a quick demonstration of how it is done. Requirements: note that this feature requires vSphere 7.0 Update 2 (U2).…
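Here is a minimal sketch of the workflow, using placeholder names and sizes: the StorageClass must have allowVolumeExpansion set to true, after which growing the volume is simply a matter of raising the requested size on the PVC while the Pod stays attached:

```yaml
# Sketch: a StorageClass that permits expansion, and a PVC being grown.
# Names and sizes are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true   # required before any expand operation
---
# Raising spec.resources.requests.storage (e.g. from 1Gi to 2Gi) triggers
# the expansion; with CSI v2.2 the Pod using the volume can stay running.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: expandable-sc
  resources:
    requests:
      storage: 2Gi   # was 1Gi
```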

VCP to vSphere CSI Migration in Kubernetes

When VMware first introduced support for Kubernetes, our first storage driver was the VCP, the in-tree vSphere Cloud Provider. Some might remember that this driver was referred to as Project Hatchway back in the day. This in-tree driver allows Kubernetes to consume vSphere storage for persistent volumes. One of the drawbacks to the in-tree driver approach was that every storage vendor had to include their own driver in each Kubernetes distribution, which ballooned the core Kubernetes code and made maintenance difficult. Another drawback of this approach was that vendors typically had to wait for a new version of Kubernetes to release…
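To make the in-tree versus out-of-tree distinction concrete: with the VCP, a StorageClass referenced the in-tree kubernetes.io/vsphere-volume provisioner, whereas the replacement CSI driver is referenced as csi.vsphere.vmware.com. A side-by-side sketch, with illustrative names and parameters:

```yaml
# Legacy StorageClass using the in-tree VCP provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vcp-sc
provisioner: kubernetes.io/vsphere-volume   # in-tree vSphere Cloud Provider
parameters:
  diskformat: thin
---
# Equivalent StorageClass using the out-of-tree vSphere CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc
provisioner: csi.vsphere.vmware.com         # vSphere CSI driver
```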