In my earlier overview of vSphere 8 announcements at VMware Explore 2022, I highlighted a number of new products and features. One of the most significant announcements is vSphere Distributed Services Engine, formerly known as Project Monterey. As mentioned in that post, this enhancement gives us the ability to offload tasks to a Data Processing Unit (DPU, aka SmartNIC). These tasks have historically been done by the x86 CPUs on the hypervisor. Now they are offloaded to the DPU, and the programmable hardware accelerator on the DPU is also leveraged to boost overall performance. The first wave of innovation around…
I was looking for a way to migrate VMkernel adapters back from a VDS to a VSS. This came about because I am testing various upcoming releases of vCenter Server 8.0 and vSphere with Tanzu. vSphere with Tanzu, if you do not use NSX-T, requires a distributed switch and distributed portgroups. After building out some test environments, I wanted to roll back a distributed switch (VDS) configuration to a standard vSwitch (VSS) configuration. The process seems to have changed a few times in the past, and I could not find anything that demonstrated how to do this task on vSphere 7.0. Thus…
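For context, the manual equivalent on an individual ESXi host can be done with esxcli. This is only a minimal sketch for a non-management vmkernel adapter; the adapter name vmk1, the vSwitch/portgroup names and the IP address below are hypothetical placeholders, and moving the management vmk this way over SSH will drop your session, so use the DCUI or host client for that one:

# create a standard vSwitch with an uplink and a portgroup to land the vmkernel adapter on
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion-VSS

# remove the vmkernel adapter from the VDS, then recreate it on the VSS portgroup and re-apply its IP
esxcli network ip interface remove --interface-name=vmk1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-VSS
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.11 --netmask=255.255.255.0 --type=static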
Many readers with an interest in Kubernetes, and particularly Tanzu, will be well aware that there is no embedded Load Balancer service provider available in vSphere. Instead, the Load Balancer service needs to be provided through an external source. VMware supports a number of different mechanisms to provide such a service for Tanzu. One of the more popular providers is the NSX Advanced Load Balancer, formerly Avi Vantage. In the most recent release, version 22.1.1, some of the setup steps have changed significantly. In this post, I will highlight the setup of the new NSX ALB. Important: NSX ALB v22.1.1…
This post continues to build on some of the other work already done on vSphere with Tanzu and NSX-T. In previous posts, we’ve seen how to set up NSX-T so it can be used by vSphere with Tanzu. The steps to install NSX-T Manager and prepare the ESXi hosts were looked at in part 1. We saw how to set up an NSX-T Edge in part 2. Then in part 3, the steps to create a tier-0 gateway with BGP for dynamic routing were shown. Most recently, the various NSX-T objects and services that are configured when the Supervisor cluster is deployed were…
I have been spending a lot of time recently on vSphere with Tanzu and NSX-T. One of the tasks that I want to do is perform a network trace from a pod running on a TKG worker node. This will be for a future post. However, before running the trace, I need to secure shell (ssh) onto a TKG worker node in order to run the traceroute. This is more challenging with NSX-T than with vSphere networking, because NSX-T provides “internal” network segments for the nodes which sit behind a tier-1 and tier-0 gateway. To…
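At a high level, the approach relies on the SSH secret that vSphere with Tanzu stores in the cluster’s vSphere Namespace on the Supervisor. A minimal sketch, assuming a TKG cluster named tkg-cluster in a namespace called ns1, and a jumpbox that can reach the internal NSX-T segment (the names and node IP are placeholders):

# logged into the Supervisor cluster context, pull the cluster's private key from its namespace
kubectl get secret tkg-cluster-ssh -n ns1 -o jsonpath='{.data.ssh-privatekey}' | base64 -d > tkg-cluster-ssh-key
chmod 600 tkg-cluster-ssh-key

# from a jumpbox with reachability to the node's internal segment, ssh to the worker as the built-in user
ssh -i tkg-cluster-ssh-key vmware-system-user@<worker-node-ip>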
In my most recent posts, the steps to get NSX-T to a point where it is ready for vSphere with Tanzu were examined. A three-part blog series describes the NSX-T setup process for vSphere with Tanzu – see part 1, part 2, and part 3. In this post, we will take a look ‘under the covers’ at the network objects and services that vSphere with Tanzu automatically builds in NSX-T. As per the previous configuration steps, a number of NSX-T system objects are set up, such as the Compute Manager and Edge Cluster. Some network objects must also be…
The steps to deploy NSX-T Manager, create a Compute Manager, and configure NSX on the ESXi hosts were described in part 1 of this series of posts. The steps to create an NSX-T Edge cluster were outlined in part 2. In this part 3 post, we will look at the final step in preparing an NSX-T environment for vSphere with Tanzu: the creation and configuration of a tier-0 gateway. Networks that are created for Kubernetes workloads in vSphere with Tanzu will connect to this tier-0 gateway, which in turn allows external connectivity to the TKG clusters, e.g. developers…
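Once the tier-0 gateway and its BGP peering are configured, it is worth verifying that the BGP sessions have actually established before moving on to the Supervisor deployment. One way to check this is from the NSX-T Edge node CLI; a minimal sketch, where the VRF id (3 below) is just an example and should be taken from the tier-0 service router reported on your own edge:

# on the NSX-T Edge node CLI, list the logical routers to find the tier-0 service router and its VRF id
get logical-routers

# switch into that VRF and check the state of the BGP neighbors
vrf 3
get bgp neighbor summary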