One of the key features of the TKG 2.0 on vSphere 8 announcement at VMware Explore 2022 is the consolidation of our Tanzu Kubernetes offerings into a single unified Kubernetes runtime. This can be considered the second edition of VMware Tanzu Kubernetes Grid. It will still come in two flavors: one is a VM-based standalone management cluster, while the other is Supervisor-based, integrated into vSphere with Tanzu. However, the important point is that both flavors now have the same APIs for cluster provisioning, the same tooling for extension management, and the same model for release distribution.…
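By way of illustration, cluster provisioning against either flavor now goes through the same ClusterClass-based API. A bare-bones, hypothetical sketch is shown below; the namespace, cluster name, node pool name, and version string are all placeholders of my own, not something taken from the announcement.

```sh
# A minimal sketch of ClusterClass-based provisioning in TKG 2.0.
# All names here (namespace, cluster, version) are illustrative.
kubectl apply -f - <<EOF
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: workload-01
  namespace: demo-ns
spec:
  topology:
    class: tanzukubernetescluster    # built-in ClusterClass on vSphere 8
    version: v1.23.8+vmware.2-tkg.2-zshippable
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
        - class: node-pool
          name: node-pool-1
          replicas: 2
EOF
```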
This post continues to build on some of the other work already done on vSphere with Tanzu and NSX-T. In previous posts, we’ve seen how to set up NSX-T so it can be used by vSphere with Tanzu. The steps to install NSX-T Manager and prepare the ESXi hosts were covered in part 1. We saw how to set up an NSX-T Edge in part 2. Then in part 3, the steps to create a tier-0 gateway with BGP for dynamic routing were shown. Most recently, the various NSX-T objects and services that are configured when the Supervisor cluster is deployed were…
I have been spending a lot of time recently on vSphere with Tanzu and NSX-T. One of the tasks that I want to do is perform a network trace from a pod running on a TKG worker node. This will be for a future post. However, before running the trace, I need to secure shell (ssh) onto a TKG worker node in order to run the traceroute. This is more challenging with NSX-T than with vSphere networking, because NSX-T places the nodes on “internal” network segments which sit behind a tier-1 and tier-0 gateway. To…
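As a rough sketch of how I go about this, the commands below pull the node SSH key from the vSphere Namespace on the Supervisor cluster and use it from a host that can reach the internal segment. The cluster name (workload-01), namespace (demo-ns), and node IP are hypothetical placeholders; the secret and user names are as I understand vSphere with Tanzu to create them.

```sh
# Run with kubectl logged into the Supervisor cluster.
# "workload-01" and "demo-ns" are hypothetical placeholders.

# The worker node IPs sit on internal NSX-T segments; list them first.
kubectl get virtualmachines -n demo-ns -o wide

# vSphere with Tanzu keeps an SSH private key for the cluster nodes in
# a secret named <cluster-name>-ssh in the vSphere Namespace.
kubectl get secret workload-01-ssh -n demo-ns \
  -o jsonpath='{.data.ssh-privatekey}' | base64 -d > workload-01-key.pem
chmod 600 workload-01-key.pem

# From a jumpbox (pod or VM) that can route to the tier-1 segment,
# ssh to the node as the built-in vmware-system-user account.
ssh -i workload-01-key.pem vmware-system-user@<worker-node-ip>
```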
Some time back, I wrote a blog post about how to use the network policies available with the Antrea CNI (Container Network Interface). In that post we looked at how to create a simple network policy to prevent communication between pods in a Tanzu Kubernetes cluster, based on pod selectors / labels. We stood up a simple web server and a standalone pod, and showed how the pod could access the web server when no network policies were in place. We then proceeded to create a network policy that only allowed pods to communicate with each other if the pod…
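For context, a selector-based policy of that sort looks roughly like the sketch below. The namespace and labels (app: web-server, access: "true") are made up for illustration; the mechanics are standard Kubernetes NetworkPolicy, which Antrea enforces.

```sh
# Only pods labelled access="true" may reach the web server on port 80.
# Namespace and labels are illustrative.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-from-labelled-pods
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: web-server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              access: "true"
      ports:
        - protocol: TCP
          port: 80
EOF
```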
I’ve spent quite a bit of time highlighting many of the new features of vSphere with Tanzu in earlier blog posts. In those posts, we saw how vSphere with Tanzu could be used to provision Tanzu Kubernetes Grid (TKG) guest clusters to provide a native, upstream-like, VMware-supported Kubernetes. In this post, I want to delve into the guest cluster in more detail and examine the new default Container Network Interface (CNI), Antrea, that now ships with the TKG guest cluster. Antrea provides networking and security services for a Kubernetes cluster. It is based on the Open vSwitch…
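For readers who want to poke at it straight away, a quick way to see Antrea's moving parts on a TKG guest cluster is sketched below. It assumes the usual upstream labels (app=antrea, component=antrea-agent) and that Antrea runs in kube-system, as it did in the clusters I looked at.

```sh
# Antrea runs as a central antrea-controller Deployment plus an
# antrea-agent DaemonSet (one agent + OVS datapath per node).
kubectl get pods -n kube-system -l app=antrea -o wide

# antctl ships inside the agent image and reports component versions.
kubectl exec -n kube-system -c antrea-agent \
  "$(kubectl get pod -n kube-system -l component=antrea-agent \
      -o jsonpath='{.items[0].metadata.name}')" -- antctl version
```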
In this final installment of my “vSphere with Tanzu” posts, we are going to look at how to create our very first Tanzu Kubernetes Grid (TKG) guest cluster. In previous posts, we have compared vSphere with Tanzu to VCF with Tanzu, and covered the prerequisites. Then we looked at the steps involved in deploying the HA-Proxy to provide a load balancer service to vSphere with Tanzu. In my most recent post, we looked at the steps involved in enabling workload management. Now that all of that is in place, we are finally able to go ahead and deploy a TKG cluster,…
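To give a flavour of what's coming, a bare-bones TanzuKubernetesCluster manifest looks something like the sketch below. The name, namespace, VM class, storage class, and version are all placeholders you would adjust to your own environment.

```sh
# A minimal TanzuKubernetesCluster sketch; all names are placeholders.
kubectl apply -f - <<EOF
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: demo-ns
spec:
  distribution:
    version: v1.18            # resolved against the available releases
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: vsan-default-storage-policy
    workers:
      count: 2
      class: best-effort-small
      storageClass: vsan-default-storage-policy
EOF
```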
Now that we have our vSphere with Kubernetes deployed, we take the next logical step in this post and deploy a Tanzu Kubernetes Grid (TKG) guest cluster. [Update] Whilst guest cluster isn’t an official name for the Tanzu Kubernetes cluster, I’ll use it in this post to differentiate it from the Supervisor cluster deployed with vSphere with Kubernetes. TKG is a fully CNCF-certified Kubernetes distribution. It is deployed as a set of virtual machines, in accordance with a TanzuKubernetesCluster manifest which we will look at later. The OS and K8s distribution are also specified in the manifest. There may…
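Before filling in that manifest, it helps to know which distributions the Supervisor cluster has synced from the subscribed content library. Assuming the library is already in place, a quick check looks like this:

```sh
# List the OS/K8s images available to reference in the manifest;
# the image names encode both the OS and the Kubernetes version.
kubectl get virtualmachineimages
```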