TKG & vSAN File Service for RWX (Read-Write-Many) Volumes

A common question I get in relation to VMware Tanzu Kubernetes Grid (TKG) is whether or not it supports vSAN File Service, and specifically the read-write-many (RWX) feature for container volumes. To address this question, we need to make a distinction regarding how TKG is provisioned. There is the multi-cloud version of TKG, which can run on vSphere, AWS or Azure, and is deployed from a TKG manager. Then there is the embedded TKG edition, where ‘workload clusters’ are deployed in Namespaces via vSphere with Tanzu / VCF with Tanzu. To answer the question about whether or not TKG…
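To give a sense of what RWX looks like from the Kubernetes side, here is a minimal sketch of a PersistentVolumeClaim requesting ReadWriteMany access. The storage class name "vsan-file-sc" and the PVC name are placeholders, not taken from the post; in a real environment the storage class would map to a policy backed by vSAN File Service.

# Hypothetical PVC requesting a read-write-many (RWX) file volume.
# "vsan-file-sc" is a placeholder storage class name that would map to
# a vSAN File Service backed storage policy in your own environment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-demo-pvc
spec:
  accessModes:
    - ReadWriteMany        # multiple pods can mount this volume read-write
  resources:
    requests:
      storage: 2Gi
  storageClassName: vsan-file-sc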

Deploying TKG v1.2.0 in an internet-restricted environment using Harbor

In this post, I am going to outline the steps involved to successfully deploy a Tanzu Kubernetes Grid (TKG) management cluster and workload clusters in an internet-restricted environment. This is often referred to as an air-gapped environment. Note that for part of this exercise, a virtual machine will need to be connected to the internet in order to pull down the images required for TKG. Once these have been downloaded and pushed up to our local Harbor container image registry, the internet connection can be removed and we will work in a completely air-gapped environment. Note that TKG here…
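By way of illustration, once the images are hosted in Harbor, the TKG cluster configuration is pointed at the local registry instead of the public one. The entries below are a sketch only: the variable names are as I recall them from the TKG 1.2 documentation and the registry FQDN "harbor.example.com" is a placeholder, so verify both against the official air-gapped deployment guide.

# Sketch of the relevant entries in the TKG cluster configuration file
# (e.g. ~/.tkg/config.yaml). Variable names and values are assumptions to
# be checked against the TKG 1.2 documentation; the FQDN is a placeholder.
TKG_CUSTOM_IMAGE_REPOSITORY: harbor.example.com/library/tkg
TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: "true"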

Creating developer users and namespaces (scripted) in TKG “Guest” Clusters

I’ve spent a lot of time recently on creating and building out a vSphere with Tanzu environment, with the goal of deploying a Tanzu Kubernetes “guest” cluster. I frequently used the kubectl-vsphere command to log out of the Supervisor namespace context and log in to the Guest cluster context. This allowed me to start deploying stateless and stateful apps in my Tanzu Kubernetes Guest cluster. I thought no more about this step until a recent conversation with my colleague Frank Denneman. He queried whether or not Kubernetes developers would actually have vSphere privileges to do this. It was a great question which led…
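To give a flavour of what such a script ends up creating inside the guest cluster, the sketch below is a plain Kubernetes namespace plus a RoleBinding that grants the built-in "edit" ClusterRole to a developer. The namespace name "dev-ns" and user name "dev-user" are purely illustrative and not from the post.

# Illustrative namespace and RBAC objects for a developer in a TKG guest
# cluster. Names ("dev-ns", "dev-user") are placeholders.
apiVersion: v1
kind: Namespace
metadata:
  name: dev-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-user-edit
  namespace: dev-ns
subjects:
  - kind: User
    name: dev-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in ClusterRole with read/write access
  apiGroup: rbac.authorization.k8s.io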

Persistent Volume Placement in HCI-Mesh deployments

One of the new features introduced in vSphere 7.0U1 is HCI-Mesh, the ability to remotely mount vSAN datastores between vSAN clusters managed by the same vCenter Server. My buddy and colleague Duncan has done a great write-up on this topic on his yellow-bricks blog. In this post, I am going to look at how to address the situation of selecting the correct vSAN datastore when provisioning Kubernetes Persistent Volumes in an environment which uses HCI-Mesh. Let’s start with why this situation needs additional consideration. Let’s assume that there is a vSphere cluster that has vSAN enabled, and thus this cluster…
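To make the datastore-selection point concrete, here is a minimal sketch of a StorageClass for the vSphere CSI driver that references a storage policy by name. The StorageClass and policy names are placeholders; the idea is that the policy is created in vCenter so that it is compatible only with the intended vSAN datastore, whether local or remotely mounted via HCI-Mesh.

# Hypothetical StorageClass for the vSphere CSI driver. The storage policy
# name is a placeholder; the policy itself would be crafted in vCenter so
# that it matches only the desired vSAN datastore.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-remote-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "remote-vsan-policy"   # placeholder policy name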

Virtually Speaking Podcast Episode #174: vSphere with Tanzu

I’m sure most readers are aware that we now have two versions of what was initially called “Project Pacific” at VMworld 2019. Our initial release with vSphere 7.0 (vSphere with Kubernetes) was only available with VCF & NSX-T. However, with the release of vSphere 7.0U1, whilst we continue to have VCF with Tanzu, there is a new version outside of VCF called vSphere with Tanzu. I have written about how to get started with this new version, from covering the prerequisites, deploying an HA-Proxy, enabling vSphere with Tanzu Workload Management and deploying your first TKG ‘guest’ cluster. In this…

Deploy TKG ‘guest’ cluster in vSphere with Tanzu [Video]

In a previous video, we looked at the steps involved in enabling vSphere with Tanzu / Workload Management. That video concluded with the creation of a vSphere Namespace. In this video, we will demonstrate how to log in to the namespace, how to create a Tanzu Kubernetes Grid (TKG) ‘guest’ cluster via a simple manifest / YAML file, and then how to change contexts so that a developer can work in the context of the new TKG guest cluster. This video accompanies a more detailed write-up on deploying a TKG guest cluster in vSphere with Tanzu.
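For readers who have not seen one, a minimal TanzuKubernetesCluster manifest looks roughly like the sketch below. The cluster name, namespace, distribution version, VM class and storage class are all placeholders and should be replaced with values that are actually available in your own Supervisor cluster.

# Rough sketch of a TKG 'guest' cluster manifest for vSphere with Tanzu.
# All names, classes and versions are placeholders; check what is available
# in your own environment before applying anything like this.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-guest-01
  namespace: demo-namespace
spec:
  distribution:
    version: v1.18             # resolved against the available TKG releases
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: vsan-default-storage-policy
    workers:
      count: 2
      class: best-effort-small
      storageClass: vsan-default-storage-policy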

Enabling vSphere with Tanzu using HA-Proxy [Video]

In this video, we will look at the steps involved in vSphere 7.0U1 to enable vSphere with Tanzu / Workload Management. We will also look at how this differs from VCF with Tanzu, which leverages NSX-T for networking functionality. Here we show what properties need to be provided to successfully enable vSphere with Tanzu when an HA-Proxy is providing the Load Balancer / Virtual Server functionality for both the Supervisor control plane API server and the Tanzu Kubernetes Grid ‘guest’ clusters’ API servers. The demonstration will conclude with the creation of our first Namespace. This video accompanies…