Virtually Speaking Podcast Episode #174: vSphere with Tanzu

I’m sure most readers are aware that we now have 2 versions of what was initially called “Project Pacific” at VMworld 2019. The initial release with vSphere 7.0 (vSphere with Kubernetes) was only available with VCF & NSX-T. However, with the release of vSphere 7.0U1, whilst we continue to have VCF with Tanzu, there is a new version outside of VCF called vSphere with Tanzu. I have written about how to get started with this new version, covering the prerequisites, deploying an HA-Proxy, enabling vSphere with Tanzu Workload Management, and deploying your first TKG ‘guest’ cluster. In this…

Deploy TKG ‘guest’ cluster in vSphere with Tanzu [Video]

In a previous video, we looked at the steps involved in enabling vSphere with Tanzu / Workload Management. That video concluded with the creation of a vSphere Namespace. In this video, we will demonstrate how to log in to the namespace, how to create a Tanzu Kubernetes Grid (TKG) ‘guest’ cluster via a simple manifest / YAML file, and then how to change contexts so that a developer can work in the context of the new TKG guest cluster. This video accompanies a more detailed write-up on deploying a TKG guest cluster in vSphere with Tanzu.
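For anyone following along at home, the flow shown in the video boils down to a handful of kubectl commands. Here is a minimal sketch; the Supervisor VIP address, namespace, and cluster name are placeholder values, not necessarily those used in the video:

```sh
# Log in to the Supervisor cluster using the kubectl vsphere plugin
# (192.168.100.10 is a placeholder for the Supervisor control plane VIP)
kubectl vsphere login --server=192.168.100.10 \
  --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify

# Switch context to the vSphere Namespace created previously
kubectl config use-context my-namespace

# Create the TKG 'guest' cluster from a manifest / YAML file
kubectl apply -f tkg-cluster.yaml

# Once the cluster is running, log in to it and switch context so that
# subsequent kubectl commands run against the guest cluster, not the Supervisor
kubectl vsphere login --server=192.168.100.10 \
  --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify \
  --tanzu-kubernetes-cluster-namespace my-namespace \
  --tanzu-kubernetes-cluster-name tkg-cluster-01
kubectl config use-context tkg-cluster-01
```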

Enabling vSphere with Tanzu using HA-Proxy [Video]

In this video, we will look at the steps involved in vSphere 7.0U1 to enable vSphere with Tanzu / Workload Management. We will also see how this differs from VCF with Tanzu, which leverages NSX-T for networking functionality. Here we show which properties need to be provided to successfully enable vSphere with Tanzu when an HA-Proxy is providing the Load Balancer / Virtual Server functionality for both the Supervisor control plane API server and the Tanzu Kubernetes Grid ‘guest’ clusters’ API servers. The demonstration will conclude with the creation of our first Namespace. This video accompanies…
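To give a flavour of those properties, the sketch below lists the sort of values the Workload Management wizard asks for in the HA-Proxy configuration. Note this is purely illustrative; the wizard collects these through the vSphere Client UI, not from a file, and all of the addresses are placeholders:

```yaml
# Illustrative only; these values are entered in the Workload Management wizard
management_network:
  starting_ip: 192.168.50.10              # 5 consecutive IPs for the Supervisor control plane VMs
workload_network:
  ip_range: 192.168.60.32-192.168.60.63   # addresses for Supervisor and TKG cluster nodes
load_balancer:
  type: HA-Proxy
  management_endpoint: 192.168.50.5:5556  # HA-Proxy Data Plane API address
  virtual_server_ip_range: 192.168.70.0/25  # must match the range configured on the HA-Proxy
  server_ca_cert: <HA-Proxy certificate>  # retrieved from the HA-Proxy appliance
```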

Deploying Tanzu Kubernetes “guest” cluster in vSphere with Tanzu

In this final installment of my “vSphere with Tanzu” posts, we are going to look at how to create our very first Tanzu Kubernetes Grid (TKG) guest cluster. In previous posts, we have compared vSphere with Tanzu to VCF with Tanzu, and covered the prerequisites. Then we looked at the steps involved in deploying the HA-Proxy to provide a load balancer service to vSphere with Tanzu. In my most recent post, we looked at the steps involved in enabling Workload Management. Now that all of that is in place, we are finally able to go ahead and deploy a TKG cluster,…
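To give an idea of what such a deployment involves, here is a minimal sketch of a TanzuKubernetesCluster manifest of the kind used in vSphere 7.0U1. The cluster name, namespace, VM class, storage class, and version string are all placeholders and need to match what has been assigned to your own vSphere Namespace:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01                 # placeholder cluster name
  namespace: my-namespace              # the vSphere Namespace created during enablement
spec:
  distribution:
    version: v1.17                     # resolved against the subscribed content library
  topology:
    controlPlane:
      count: 1
      class: best-effort-small         # a VM class made available to the namespace
      storageClass: tkg-storage-policy # a storage class assigned to the namespace
    workers:
      count: 2
      class: best-effort-small
      storageClass: tkg-storage-policy
```

Applying the manifest with kubectl apply -f while in the namespace context kicks off the build, and kubectl get tanzukubernetesclusters can be used to watch the cluster come up.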

Enabling vSphere with Tanzu using HA-Proxy

In earlier posts, we looked at the differences between the original “VCF with Tanzu” offering and the new vSphere with Tanzu offering from VMware. One of the major differences is the use of HA-Proxy to provide a load balancing service; we covered the deployment steps of the HA-Proxy in detail in a follow-up post. In this post, we are now ready to deploy vSphere with Tanzu, also known as enabling Workload Management.

Prerequisites Revisited

The prerequisites were covered in detail in the “Getting started” post, and you won’t have been able to successfully deploy the HA-Proxy without following them.…
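As a quick sanity check of those prerequisites before launching the wizard, it is worth confirming that the HA-Proxy management endpoint is reachable. Assuming the default appliance configuration, where the Data Plane API listens on port 5556 (the address and credentials below are placeholders), something like this should return successfully:

```sh
# Placeholder address and credentials; use the values supplied when deploying the HA-Proxy OVA
curl -k -u admin:SecretPassword https://192.168.50.5:5556/v2/info
```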

Getting started with vSphere with Tanzu

With the release of vSphere 7.0U1, vSphere with Kubernetes has been decoupled from VMware Cloud Foundation (VCF). VMware now has two vSphere with Kubernetes offerings: the original VCF-based offering, now referred to as VCF with Tanzu, and a newer offering outside of VCF, referred to as vSphere with Tanzu. This write-up steps through the deployment of the new vSphere with Tanzu with HA-Proxy. I won’t cover everything in this single post, but will instead do a series of 4 posts stepping through the process.

Differences: VCF with Tanzu and vSphere with Tanzu

I thought it…

Failed to deploy PV to local volume – “No compatible datastore found for storagePolicy”

This is something that I “spun my wheels” on a little bit last week, so I decided to write a short article explaining the issue in a bit more detail. It relates to the provisioning of a Persistent Volume on the Supervisor cluster of a vSphere with Kubernetes deployment. I had a local VMFS volume on one of my hosts, so I went ahead and tagged the volume using vSphere Tagging. I then built a tag-based storage policy so that when that policy is selected for provisioning, the objects that get provisioned would be placed on that local,…
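For context, the provisioning attempt looks something like the sketch below (all names are placeholders). The PVC references the tag-based policy through its storageClassName, and the error in the title is what comes back when the policy cannot be matched to a compatible datastore during provisioning:

```yaml
# Minimal sketch; 'local-vmfs-policy' stands in for the tag-based storage policy,
# which must also be assigned to the vSphere Namespace to appear as a storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-vmfs-policy
```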