NSX ALB v22.1.1 – New Setup Steps

Many readers with an interest in Kubernetes, and particularly Tanzu, will be well aware that there is no embedded Load Balancer service provider available in vSphere. Instead, the Load Balancer service needs to be provided by an external source. VMware supports a number of different mechanisms to provide such a service for Tanzu. One of the more popular providers is the NSX Advanced Load Balancer, formerly Avi Vantage. In the most recent release, version 22.1.1, some of the setup steps have changed significantly. In this post, I will highlight the setup of the new NSX ALB. Important: NSX ALB v22.1.1…
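For readers who like to poke at the controller programmatically, here is a minimal sketch of authenticating to the NSX ALB controller REST API from Python. The controller address and credentials are placeholders, and pinning the X-Avi-Version header to 22.1.1 is my assumption for this release.

```python
import requests

# Placeholder controller address and credentials (assumptions, not from the post)
CONTROLLER = "https://avi-controller.example.com"
USERNAME = "admin"
PASSWORD = "changeme"

session = requests.Session()
session.verify = False  # lab environments often use self-signed certificates

# The controller authenticates via POST /login and returns session cookies
resp = session.post(f"{CONTROLLER}/login",
                    data={"username": USERNAME, "password": PASSWORD})
resp.raise_for_status()

# Subsequent API calls want the CSRF token plus an X-Avi-Version header
session.headers.update({
    "X-Avi-Version": "22.1.1",            # assumption: match the release
    "X-CSRFToken": session.cookies.get("csrftoken", ""),
    "Referer": CONTROLLER,
})

# Example read: list the virtual services currently configured
for vs in session.get(f"{CONTROLLER}/api/virtualservice").json().get("results", []):
    print(vs["name"])
```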

First steps with the NSX Advanced Load Balancer (NSX ALB)

As part of the vSphere 7.0 Update 2 (U2) launch, VMware now provides another Load Balancer option for vSphere with Tanzu. This new Load Balancer, built on Avi Networks technology (and previously known as Avi Vantage), is production-ready for your vSphere with Tanzu deployments. This Load Balancer, now called the NSX Advanced Load Balancer, or NSX ALB for short, provides Virtual IP addresses (VIPs) for the Supervisor Control Plane API server, the TKG (guest) cluster API servers and any Kubernetes application that requires a service of type LoadBalancer. In this post, I will go…
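To make that last point concrete, here is a minimal sketch using the official Kubernetes Python client to request a service of type LoadBalancer; in a vSphere with Tanzu environment fronted by NSX ALB, the external IP would be allocated from its VIP pool. The service name, selector and ports are placeholders.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. after kubectl vsphere login)
config.load_kube_config()
v1 = client.CoreV1Api()

# A Service of type LoadBalancer; the external VIP comes from the LB provider
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-lb"),        # placeholder name
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "demo"},                        # placeholder selector
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
created = v1.create_namespaced_service(namespace="default", body=service)

# Once assigned, the VIP appears under status.load_balancer.ingress
print(created.metadata.name)
```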

Getting started with VCF Part 4 – vRA Deployment

After taking care of all of the prerequisite steps highlighted in my VMware Cloud Foundation Part 3 post, we are now ready to deploy vRealize Automation (vRA) via vRealize Suite Lifecycle Manager (vRSLCM) from the VCF SDDC Manager. This will be a relatively short “show and tell” post, which will take you through the deployment steps. It will also show you how to monitor the progress of the vRA deployment. The complete deployment does take some time, since there are quite a number of virtual appliances and virtual machines that need to be rolled out for vRA (11 in…
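Monitoring the deployment boils down to polling the status of a request until it completes. The sketch below is purely illustrative: the endpoint path, request ID and state values are hypothetical stand-ins, not the actual vRealize Suite Lifecycle Manager API, so consult the vRSLCM API documentation for the real paths and fields.

```python
import time
import requests

# Hypothetical status endpoint, used purely for illustration
STATUS_URL = "https://vrslcm.example.com/api/requests/{req_id}"

def wait_for_deployment(session: requests.Session, req_id: str, interval: int = 60) -> str:
    """Poll a deployment request until it leaves the (hypothetical) IN_PROGRESS state."""
    while True:
        state = session.get(STATUS_URL.format(req_id=req_id)).json().get("state")
        if state != "IN_PROGRESS":
            return state
        time.sleep(interval)  # the vRA rollout takes a while, so poll gently
```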

Reviewing PKS logs and status

After a bit of a sabbatical, I am back to looking at PKS (Pivotal Container Service) again. I wanted to look at the new version 1.3, but I had to do a bit of work on my environment to allow me to do this. Primarily, I needed to upgrade my NSX-T environment from version 2.1 to 2.3. I followed this blog post from vmtechie, which provides a useful step-by-step guide. Kudos to our VMware NSX-T team, as the upgrade worked without a hitch. My next step was to start work on the PKS deployment. I just did a brand new deployment…
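As a quick way to script the kind of status checks this post walks through, the sketch below simply shells out to the pks CLI; it assumes the CLI is installed and that you have already authenticated with pks login.

```python
import subprocess

def pks_clusters() -> str:
    """Return the output of 'pks clusters', which lists each cluster's
    name, plan and last-action status."""
    result = subprocess.run(
        ["pks", "clusters"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(pks_clusters())
```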

Next steps with NSX-T Edge – Routing and BGP

If you’ve been following along on my NSX-T adventures, you’ll be aware that at this point we have our overlay network deployed, and our NSX-T Edge has been set up with DHCP servers attached to my logical switch, which in turn provides IP addresses to my virtual machines. This is all well and good, but I’d also like these VMs to reach the outside world. NSX-T enables this through a feature called logical routers. In this post, I will talk you through how to configure a tier-0 logical router which connects to the outside world, a tier-1 logical router…
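For anyone who prefers the API to the UI, here is a minimal sketch of creating a tier-0 logical router via the NSX-T Manager REST API of this era; the manager address, credentials and edge cluster ID are placeholders.

```python
import requests

MANAGER = "https://nsxt-manager.example.com"   # placeholder manager address
AUTH = ("admin", "changeme")                   # placeholder credentials

# Create a tier-0 logical router bound to an edge cluster; in a real
# environment the edge cluster ID would come from GET /api/v1/edge-clusters
payload = {
    "resource_type": "LogicalRouter",
    "display_name": "t0-router",
    "router_type": "TIER0",
    "high_availability_mode": "ACTIVE_STANDBY",
    "edge_cluster_id": "<edge-cluster-uuid>",  # placeholder
}
resp = requests.post(f"{MANAGER}/api/v1/logical-routers",
                     json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json()["id"])
```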

First Steps with NSX-T Edge – DHCP server

Now that we have an overlay network deployed, it’s time to turn our attention to the NSX-T Edge and get it to do something useful for us. An NSX-T Edge can do many useful things for you (routing, NAT, etc.), but I really want to keep things as simple as possible, so I will deploy my NSX-T Edge to provide DHCP addresses to my VMs. In order to do this, my Edge will first of all need to participate in the same overlay/tunnel network as my hosts. I will then need to create a logical switch that my VMs can…
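The logical switch itself can also be created through the NSX-T Manager REST API. Here is a minimal sketch; the manager address, credentials and overlay transport zone ID are placeholders.

```python
import requests

MANAGER = "https://nsxt-manager.example.com"   # placeholder manager address
AUTH = ("admin", "changeme")                   # placeholder credentials

# Create a logical switch in the overlay transport zone; in a real
# environment the zone ID would come from GET /api/v1/transport-zones
payload = {
    "display_name": "vm-segment",              # placeholder switch name
    "transport_zone_id": "<overlay-tz-uuid>",  # placeholder
    "admin_state": "UP",
    "replication_mode": "MTEP",
}
resp = requests.post(f"{MANAGER}/api/v1/logical-switches",
                     json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json()["id"])
```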

Building a simple ESXi host overlay network with NSX-T

I’ve recently begun to look at NSX-T. My long-term goal is to use it to build multiple Kubernetes clusters using PKS, the Pivotal Container Service. The hope is then to look at some cool storage-related items with Kubernetes. But first things first. Kudos to both Sam McGeown and William Lam for their excellent blogs on NSX-T. However, I’m coming at this as a newbie, and I’m not using a nested environment, but rather a 4-node physical environment in my lab. And I am also not separating my cluster into management and production, but rather using…
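As a starting point for checking the overlay building blocks from code, here is a minimal sketch that lists the transport zones via the NSX-T Manager REST API; the manager address and credentials are placeholders.

```python
import requests

MANAGER = "https://nsxt-manager.example.com"   # placeholder manager address
AUTH = ("admin", "changeme")                   # placeholder credentials

# List transport zones; the overlay zone is the one both the ESXi hosts and
# the edge must join for GENEVE tunnels to come up between them
zones = requests.get(f"{MANAGER}/api/v1/transport-zones",
                     auth=AUTH, verify=False).json()
for tz in zones.get("results", []):
    print(tz["display_name"], tz["transport_type"])
```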