Building a TKG Guest Cluster in vSphere with Kubernetes

Now that we have vSphere with Kubernetes deployed, we take the next logical step in this post and deploy a Tanzu Kubernetes Grid (TKG) guest cluster. TKG is a fully CNCF-certified Kubernetes distribution. It is deployed as a set of virtual machines, in accordance with a TanzuKubernetesCluster manifest which we will look at later. The OS and K8s distribution are also specified in the manifest. There may be many TKG guest clusters deployed on the same vSphere with Kubernetes infrastructure. Isolation and multi-tenancy are achieved through namespaces. Multiple namespaces may be created with one or more TKG guest clusters in…
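To give a sense of what that manifest looks like, here is a minimal sketch of a TanzuKubernetesCluster definition. The cluster name, namespace, VM class and storage class below are placeholders for illustration; the values available to you depend on how your Supervisor namespace was configured in vCenter.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01          # placeholder cluster name
  namespace: demo-ns            # a Supervisor namespace created in vCenter
spec:
  distribution:
    version: v1.16              # shorthand; resolves to a full TKG distribution version
  topology:
    controlPlane:
      count: 1
      class: best-effort-small  # VM class assigned to the namespace
      storageClass: vsan-default-storage-policy   # placeholder storage class name
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```

Applying this manifest with kubectl against the Supervisor Cluster context kicks off the rollout of the control plane and worker VMs.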

A first look at vSphere with Kubernetes in action

In my previous post on VCF 4.0, we looked at the steps involved in deploying vSphere with Kubernetes in a Workload Domain (WLD). When we completed that step, we had rolled out the Supervisor Control Plane VMs and installed the Spherelet components which allow our ESXi hosts to behave as Kubernetes worker nodes. Let’s now take a closer look at that configuration, and I will show you a few simple Kubernetes operations to get you started on the Supervisor Cluster in vSphere with Kubernetes. Disclaimer: “Like my earlier posts, I want to be clear, this post is based on a…
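For example, assuming the kubectl vSphere plugin has been downloaded from the Supervisor Cluster landing page, the very first operations look something like this (the server address and username below are placeholders):

```shell
# Log in to the Supervisor Cluster via the kubectl vSphere plugin
kubectl vsphere login --server=10.27.51.33 \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify

# Switch to the Supervisor Cluster context and take a look around
kubectl config use-context 10.27.51.33
kubectl get nodes
kubectl get namespaces
```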

Deploying flannel, vSphere CPI and vSphere CSI with later versions of Kubernetes

I recently wanted to deploy a newer version of Kubernetes to see it working with our Cloud Native Storage (CNS) feature. Having assisted with the original landing pages for CPI and CSI, I’d done this a few times in the past. However, the deployment tutorial that we used back then was based on Kubernetes version 1.14.2. I wanted to go with a more recent build of K8s, e.g. 1.16.3. By the way, if you are unclear about the purpose of the CPI and CSI, you can learn more about them on the landing pages, here for CPI and here for…
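At a high level, the flow is the same regardless of the K8s version: build the cluster with kubeadm, deploy flannel as the CNI, then layer the CPI and CSI manifests on top. A quick sketch of the first two steps, using flannel’s default pod CIDR (the manifest URL reflects the coreos/flannel repo layout at the time of writing):

```shell
# Initialise the control plane node; flannel expects this pod CIDR by default
kubeadm init --pod-network-cidr=10.244.0.0/16

# Deploy the flannel CNI plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```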

Project Pacific – VMworld 2019 Deep Dive Updates

I’m sure most readers will be somewhat familiar with VMware’s Project Pacific at this point. It really is the buzz of VMworld 2019. If I had to describe Project Pacific in as few words as possible, it is a merging of vSphere and Kubernetes (K8s) with the goal of enabling our customers to deploy new, next-gen, distributed, modern applications which may comprise container workloads or combined container and virtual machine workloads. Not only that, we also need to provide our customers with a consistent way of managing, monitoring and securing these new modern applications. This is where Project…

CNS – not just for vSAN

After a very eventful VMworld, we received lots of questions about CNS, the Cloud Native Storage feature that was released with vSphere 6.7U3. Whilst most of the demonstrations and blog articles around CNS focused on vSAN, what may have been missed is that this feature also works with both VMFS and NFS datastores. For that reason, I decided to create some examples of how CNS can also bubble up information in vSphere about Kubernetes Persistent Volumes (PVs) created on both VMFS and NFS datastores. Let’s begin by creating some simple policies to tag my VMFS datastore and my NFS datastore.…
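As a rough sketch of how a tagged datastore is then consumed from Kubernetes, a StorageClass references the storage policy containing the tag rule, and a PVC against that StorageClass lands the volume on the matching datastore. The policy name and capacity below are placeholders:

```yaml
# StorageClass referencing a vSphere storage policy that tags the VMFS datastore
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vmfs-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vmfs-tag-policy"   # placeholder policy name
---
# PVC that dynamically provisions a PV on the matching VMFS datastore
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vmfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: vmfs-sc
```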

Safekeeping – a useful tool for interacting with First Class Disks/Improved Virtual Disks

I have been doing quite a bit of work on First Class Disks (FCD), also known as Improved Virtual Disks (IVD), over the past few months. One tool that has been extremely useful in improving my understanding of FCDs has been safekeeping, a tool developed by Max Daneri of VMware which is now available to download from GitHub. In case you did not know, FCDs are used extensively in VMware’s new Cloud Native Storage (CNS) offering that is currently available with vSphere/vSAN 6.7U3. Now, whilst the primary aim of this tool is to help backup vendors become familiar with…

Moving a Stateful App from VCP to CSI based Kubernetes cluster using Velero

Since the release of the vSphere CSI driver in vSphere 6.7U3, I have had a number of requests about how we plan to migrate applications between Kubernetes clusters that use the original in-tree vSphere Cloud Provider (VCP) and Kubernetes clusters that are built with the new vSphere CSI driver. All I can say at this point is that we are looking at ways to achieve this seamlessly in the future, and that the Kubernetes community has a migration design in the works to move from in-tree providers to the new CSI driver as well.…
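In the meantime, the approach walked through in this post is to use Velero: back up the stateful application from the VCP-based cluster, then restore it onto the CSI-based cluster. A minimal sketch, assuming Velero is installed on both clusters pointing at the same object store, and using a placeholder namespace:

```shell
# On the source (VCP-based) cluster: back up the app's namespace
# ("cassandra" is a placeholder namespace for the stateful app)
velero backup create cassandra-backup --include-namespaces cassandra

# On the destination (CSI-based) cluster, using the same backup location
velero restore create --from-backup cassandra-backup
```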