Building a TKG Guest Cluster in vSphere with Kubernetes

Now that we have vSphere with Kubernetes deployed, we take the next logical step in this post and deploy a Tanzu Kubernetes Grid (TKG) guest cluster. TKG is a full, CNCF-certified Kubernetes distribution. It is deployed as a set of virtual machines, in accordance with a TanzuKubernetesCluster manifest, which we will look at later. The OS and K8s distribution are also specified in the manifest. There may be many TKG guest clusters deployed on the same vSphere with Kubernetes infrastructure. Isolation/multi-tenancy is achieved through namespaces. Multiple namespaces may be created with one or more TKG guest clusters in…
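For a flavour of what such a manifest looks like, here is a minimal sketch of a TanzuKubernetesCluster definition. The cluster name, namespace, VM class, storage class and distribution version are all placeholders, and exact field names can vary between releases, so treat this as illustrative rather than definitive.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01            # placeholder cluster name
  namespace: demo-namespace       # placeholder Supervisor namespace
spec:
  distribution:
    version: v1.16                # placeholder K8s distribution version
  topology:
    controlPlane:
      count: 3
      class: best-effort-small    # placeholder VM class
      storageClass: vsan-default-storage-policy   # placeholder storage class
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```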

A first look at vSphere with Kubernetes in action

In my previous post on VCF 4.0, we looked at the steps involved in deploying vSphere with Kubernetes in a Workload Domain (WLD). When we completed that step, we had rolled out the Supervisor Control Plane VMs and installed the Spherelet component, which allows our ESXi hosts to behave as Kubernetes worker nodes. Let’s now take a closer look at that configuration, and I will show you a few simple Kubernetes operations to get you started on the Supervisor Cluster in vSphere with Kubernetes. Disclaimer: “Like my earlier posts, I want to be clear, this post is based on a…
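As a taster, the sort of first steps involved look something like the sketch below. The server address, user name and namespace are placeholders; the kubectl vsphere login command comes from the vSphere plugin for kubectl, which is downloaded from the Supervisor Cluster control plane.

```shell
# Log in to the Supervisor Cluster (vSphere plugin for kubectl); address and user are placeholders
kubectl vsphere login --server=https://10.0.0.1 --vsphere-username administrator@vsphere.local

# Switch to a Supervisor namespace previously created in the vSphere Client (placeholder name)
kubectl config use-context demo-namespace

# The ESXi hosts show up alongside the Supervisor Control Plane VMs as Kubernetes nodes
kubectl get nodes
```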

Read-Write-Many Persistent Volumes with vSAN 7 File Services

A few weeks back, just after the vSphere 7.0 launch event, I wrote an article about Native File Services in vSAN 7.0. I had a few questions asking why we decided on NFS support in this initial release, and not something like SMB or some other protocol. The reason is quite straightforward. We are positioning vSAN as a platform for both traditional virtual machine workloads and newer containerized workloads. We chose NFS to address a storage requirement in Kubernetes, namely a way to share Persistent Volumes between Pods. To date, the vSphere CSI driver has only provisioned block-based Persistent Volumes…
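To give an idea of what this enables, a read-write-many claim against a storage class backed by vSAN File Services might look like the sketch below. The claim name, size and storage class name are placeholders for whatever policy you map to vSAN in your own environment.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-pvc                    # placeholder claim name
spec:
  accessModes:
    - ReadWriteMany                # file-based (NFS) volume, shareable between Pods
  resources:
    requests:
      storage: 2Gi                 # placeholder size
  storageClassName: vsan-file-sc   # placeholder storage class backed by a vSAN policy
```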

Getting started with VCF 4.0 Part 3 – vSphere with Kubernetes in a Workload Domain

At this point, we have a fully configured workload domain which includes an NSX-T Edge deployment. Check here for the previous VCF 4.0 deployment steps. We are now ready to go ahead and deploy vSphere with Kubernetes, formerly known as Project Pacific. Via SDDC Manager in VMware Cloud Foundation 4.0, we ensure that an NSX-T Edge is available, and we also ensure that the Workload Domain is sufficiently licensed to enable vSphere with Kubernetes. Disclaimer: “To be clear, this post is based on a pre-GA version of VMware Cloud Foundation 4.0. While the assumption is that not much…

Getting started with VCF 4.0 Part 2 – Commission hosts, Create Workload Domain, Deploy NSX-T Edge

Now that a VCF 4.0 Management Domain has been deployed, we can move on to creating our very first VCF 4.0 Virtual Infrastructure Workload Domain (VI WLD). We will require a VI WLD with an NSX-T Edge cluster before we can deploy vSphere with Kubernetes (formerly known as Project Pacific). Not too much has changed in the WLD creation workflow since version 3.9. We still have to commission ESXi hosts before we can create the WLD. One difference from previous versions of VCF, however, is that in VCF 4.0 we can automatically provision NSX-T Edge clusters from SDDC Manager to…

Getting started with VMware Cloud Foundation (VCF) 4.0

On March 10th, VMware announced a range of new and updated products and features. One of these was VMware Cloud Foundation (VCF) version 4.0. In the following series of blog posts, I am going to show you the steps to deploy VCF 4.0. We will begin with the deployment of a Management Domain. Once this is complete, we will commission some additional hosts and build our first workload domain (WLD). After that, we will deploy the latest version of the NSX-T Edge cluster to our Workload Domain. The great news here is that this part has now been automated in VCF 4.0. Finally,…

Track vSAN Memory Consumption in vSAN 7

One of the most common questions in relation to vSAN performance is how much CPU and memory vSAN actually consumes on an ESXi host, i.e. what the overhead of running vSAN is. Through the vSAN Performance Service, we have been able to show both host and vSAN CPU usage for some time. However, up to now, we have only been able to show host memory usage, and not the overhead attributed to vSAN. It has also been extremely difficult to determine how much memory vSAN requires. Way back in 2014, with the first vSAN release, version 5.5, I wrote this…