The steps to deploy NSX-T Manager, create a Compute Manager and configure NSX on the ESXi hosts were described in part 1 of this series of posts. The steps to create an NSX-T Edge cluster were outlined in part 2. In this part 3 post, we will look at the final step in preparing an NSX-T environment for vSphere with Tanzu: the creation and configuration of a tier-0 gateway. Networks that are created for Kubernetes workloads in vSphere with Tanzu will connect to this tier-0 gateway and subsequently allow external connectivity to the TKG clusters, e.g. developers…
Part 1 of 3 covered how to add vCenter Server as the NSX Compute Manager and how to configure the ESXi hosts as host transport nodes. In this part 2 of the series, the creation of an NSX Edge cluster is described. Once again, the end goal of this post is to have an NSX-T configuration that can be leveraged by vSphere with Tanzu. When this part is complete, the overlay network should extend to include the Edge nodes for east-west traffic. The Edge nodes will also be configured to have uplinks to allow for…
It has been quite some time since I looked at deploying NSX-T, VMware’s unified networking platform. The reason for this is that VCF, VMware Cloud Foundation, takes care of the deployment and configuration of NSX-T automatically through the SDDC Manager. However, I wanted to revisit it and do it the hard way, just to re-educate myself on the steps involved. The goal is to have an NSX-T configuration that can be leveraged by vSphere with Tanzu. Since this is rather a lengthy process, I will divide it up into 3 separate posts. The first will focus on the configuration of the ESXi hosts…
Earlier this month, I had my first look at network policies in Tanzu Mission Control (TMC). That earlier post looked at a very simple network policy, where I used a web server app and showed how we could control access to it from other pods by using labels. In this post, I wanted to do something a bit more detailed. For the purposes of this test, I will use a pod-based NFS server, and then control access to it from other pods that wish to mount the NFS file share from the server pod. I have already…
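For context, a policy of the kind described in that post is equivalent in effect to a standard Kubernetes NetworkPolicy that selects the server pod and admits only labelled client pods. The sketch below is purely illustrative; the namespace, labels and port are placeholders and are not taken from the post itself (NFS is conventionally served on TCP 2049).

```yaml
# Illustrative only: permit NFS traffic to the server pod from
# pods labelled role=nfs-client; all other ingress is denied.
# Namespace, labels and port are placeholders, not from the original post.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nfs-clients
  namespace: nfs-test
spec:
  podSelector:
    matchLabels:
      app: nfs-server          # the pod-based NFS server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: nfs-client # only pods carrying this label may connect
      ports:
        - protocol: TCP
          port: 2049           # NFS
```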
In this post, I am going to show how I set up my Tanzu Kubernetes Grid management cluster using a proxy configuration. I suspect this is something many readers will want to try at some point, for various reasons. I will add a caveat to say that I have done the bare minimum to get this configuration to work, so you will probably want to spend far more time than I did on tweaking and tuning the proxy configuration. At the end of the day, the purpose of this exercise is to show how a TKG bootstrap virtual machine…
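As a rough illustration of the kind of settings involved (not the exact values used in the post), the TKG management cluster configuration file can carry the proxy details via the TKG_*_PROXY variables. The proxy hostname, port and NO_PROXY ranges below are placeholders.

```yaml
# Illustrative fragment of a TKG management cluster config file.
# Proxy endpoint and NO_PROXY ranges are placeholders.
TKG_HTTP_PROXY_ENABLED: "true"
TKG_HTTP_PROXY: "http://proxy.example.com:3128"
TKG_HTTPS_PROXY: "http://proxy.example.com:3128"
TKG_NO_PROXY: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,.svc,.svc.cluster.local"
```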
VMware recently announced the release of TKG version 1.4. On reading through the release notes, there were a few features that caught my eye, so I thought I would deploy a cluster and take a closer look. In particular, two features were of interest. The first of these is support for the NSX Advanced Load Balancer (ALB) service in workload clusters, which is available through the Avi Kubernetes Operator (AKO). This is applicable when TKG is deployed on vSphere. There is also new support for the NSX ALB as a control plane endpoint provider…
A short video to demonstrate how network access to Kubernetes Persistent Volumes backed by vSAN File Service file shares can be controlled. This allows an administrator to determine who has read-write access and who has read-only access to a volume, based on the network from which they are accessing the volume. This involves modifying the configuration file of the vSphere CSI driver, as shown in the following demonstration. The root squash parameter can also be controlled using this method. This links to a more detailed step-by-step write-up on how to edit the vSphere CSI driver configuration file and control…
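For readers unfamiliar with the mechanism, the vSphere CSI driver configuration file supports NetPermissions sections for file volumes, mapping source networks to READ_WRITE, READ_ONLY or NO_ACCESS permissions together with a root squash setting. The fragment below is a generic illustration with placeholder IP ranges; it is not the configuration used in the demonstration.

```
# Illustrative csi-vsphere.conf fragment; IP ranges are placeholders.
[NetPermissions "A"]
ips = "*"
permissions = "READ_ONLY"    # default for all other networks
rootsquash = false

[NetPermissions "B"]
ips = "10.20.20.0/24"
permissions = "READ_WRITE"   # read-write access only from this network
rootsquash = true
```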