NSX-T and vSphere with Tanzu – automatically created network objects and services

In my most recent posts, I examined the steps to get NSX-T to a point where it is ready for vSphere with Tanzu. A three-part blog series describes the NSX-T setup process for vSphere with Tanzu – see part 1, part 2, and part 3. In this post, we will take a look ‘under the covers’ at the network objects and services that vSphere with Tanzu automatically builds in NSX-T.

As per these previous configuration steps, a number of NSX-T system objects are set up, such as the Compute Manager and Edge Cluster. Some network objects must also be created in advance. These include IP Address Pools for the Host and Edge TEPs, a Network Segment, and a Tier-0 gateway associated with the Edge Cluster for external network access. These items are all consumed by what is commonly referred to as the system namespace, and are essential for a correctly functioning Supervisor cluster. We will now examine the network objects and services that are automatically created once the Supervisor cluster is deployed.

Network Overview and Topology

The easiest place to begin is with Network Overview, found under the Networking tab in the NSX Manager. This is what is displayed when vSphere with Tanzu has been enabled and the Supervisor cluster has been deployed.

The following are the new objects and services created, which I have highlighted:

  • 1 x Tier-1 Gateway
  • 1 x Segment (1 existed already – a segment for external access)
  • 1 x DHCP Server
  • 3 x IP Pools (2 existed already – IP Pools for Host TEPs and Edge TEPs)
  • 3 x NAT Rules
  • 2 x Load Balancers

Let’s now take a look at the Network Topology to see how all of this ties together.

We can see the three virtual machines that make up the Supervisor cluster in the lower part of the diagram above. These are all on the same newly-created segment, and are connected to the same newly-created Tier-1 gateway. This gateway is then connected to the already existing Tier-0 Gateway, which provides external access via the Edge Cluster (not shown). Note that the Tier-1 has 3 Services, while the Tier-0 has a single Service.

Let’s look at the newly created segment next.

Segment & IP Address Pools

By clicking on the segment in the Network Topology view above, we can learn more details about it.

As shown here, we get the full name of the segment and can also see that it is of type overlay. If we navigate to the Segment view, we can learn more details about it. The two items of particular interest are shown below. The first is the number of ports consumed on the segment. There are 3, one for each of the Supervisor nodes / virtual machines. The second item of interest is the IP Address Pool. This is where the IP addresses assigned to the Supervisor virtual machine network interfaces on this segment are retrieved from. One other thing to note is the subnet 10.244.0.1/28 associated with the segment. This /28 is carved out of the Namespace Network CIDR (10.244.0.0/20) that was provided with the Workload Network configuration at Supervisor cluster creation time.
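
If you prefer the API to the UI, the same segment details can be pulled from the NSX Policy REST API. A minimal sketch – the NSX Manager hostname and credentials below are placeholders for your own environment:

    # List all segments, including the auto-created Supervisor segment;
    # the 'subnets' field in the response shows the 10.244.0.1/28 prefix
    curl -k -u admin:'VMware1!' \
      https://nsx-manager.lab.local/policy/api/v1/infra/segments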

Let’s take a look at the ports. This view appears when you click the port count highlighted above in the NSX Manager UI.

Let’s look at the IP Address Pools next. This view can be found under the Networking tab, IP Management section. There are 3 IP Address Pools created in total when the Supervisor cluster is enabled.

One IP Address Pool is for the Namespace Network as mentioned earlier. The other two are IP Address Pools for the Ingress and Egress ranges which were also provided with the Workload Network configuration at Supervisor cluster creation time. Here is the IP Address Pool for Ingress.

And here is the IP Address Pool for Egress.

We can match these to the Workload Network configuration in vSphere with Tanzu. Note the Namespace Network, Ingress and Egress entries. We will also see where the Services CIDR is used shortly.

As the name suggests, the Ingress range is used for ‘Load Balancer’ or ‘Ingress’ type services on the Supervisor cluster. The Egress range is used when there is a SNAT (Source Network Address Translation) requirement for traffic exiting the Supervisor Cluster Namespace to access external services.
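
To make the Ingress usage concrete, here is a minimal sketch of a Kubernetes Service of type LoadBalancer. Deploying something like this in a vSphere Namespace results in a virtual server on the NSX-T load balancer, with an address allocated from the Ingress pool. The namespace, service name and app=web selector are hypothetical names, not anything vSphere with Tanzu creates for you:

    # Assumes a deployment labelled app=web already exists in the
    # (hypothetical) vSphere Namespace 'demo-ns'
    kubectl -n demo-ns apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: web-lb
    spec:
      type: LoadBalancer   # triggers a virtual server on the NSX-T load balancer
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 8080
    EOF

    # The EXTERNAL-IP shown here is allocated from the Ingress range
    kubectl -n demo-ns get svc web-lb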

That completes the explanation of the automatically created Segment and IP Address Pools. The next step is to examine the Network Services associated with the Tier-1 Gateway, which is also automatically created, and to which the segment examined above is attached.

Tier-1 Network Services – Load Balancer

Earlier, when examining the Tier-1 Gateway in the Network Topology view, we observed 3 services. The 3 services are as follows, and all are automatically created during deployment. We can click on the Tier-1 Gateway to view them in more detail.

Let’s take the Load Balancer first. This is a server load balancer, as opposed to the distributed load balancer. Load balancers can be viewed from the Load Balancing view, which can be found under the Networking tab, Network Services section.

Note the number of virtual servers associated with each Load Balancer. The server load balancer has 4, whilst the distributed load balancer has 29. Let’s look at the server load balancer virtual servers first.

Of the four virtual servers on the server load balancer, the top two are for CSI metrics, which you can read about here. The third is the K8s API server, on port 6443. But also available is port 443 which, when you connect to it, displays the Kubernetes tools landing page for interacting with vSphere with Tanzu. Note that the IP addresses listed here come from the Ingress range provided during Supervisor cluster setup, as shown above.
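
Both endpoints are easy to check from a workstation. A quick sketch – the VIP below is a placeholder for whichever Ingress address your Supervisor cluster was assigned:

    # Port 443: fetch the Kubernetes tools landing page
    curl -k https://10.27.51.33/

    # Port 6443 is the API server itself; the vSphere plugin for kubectl
    # authenticates against the same VIP
    kubectl vsphere login --server=10.27.51.33 \
      --vsphere-username administrator@vsphere.local \
      --insecure-skip-tls-verify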

We can also list the virtual servers on the distributed load balancer. These are Supervisor cluster services which do not need to reach out externally (e.g. ClusterIP type). The IP addresses come from the Services CIDR (10.96.0.0/23) which is also configured at setup time. This can also be seen in the workload networking configuration shown above. You can get a similar list by running a kubectl get svc -A on the Supervisor cluster.
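
As a quick check, every CLUSTER-IP returned should fall inside the 10.96.0.0/23 Services CIDR:

    # List services across all namespaces on the Supervisor cluster
    kubectl get svc -A

    # Narrow the output to just the ClusterIP type services
    kubectl get svc -A | grep ClusterIP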

That completes the automated Load Balancer setup overview. Let’s look at the NAT rules next.

Tier-1 Network Services – NAT

Again, just like the Load Balancer configuration, there is no NAT configuration on the Tier-0 Gateway. NAT is only configured on the Tier-1 Gateway.

There are 3 rules created; 2 are “No SNAT” rules, and one is a “SNAT” rule. The “No SNAT” rules relate to East-West (internal) traffic. The “SNAT” rule relates to North-South (external) traffic. The way I understand these rules to behave is as follows (there is a sketch of how to retrieve these rules via the API after this list):

  • The first “No SNAT” rule matches the Ingress range. This (as I understand it) is because each vSphere Namespace, including the system namespace used by the Supervisor cluster, gets its own separate network setup. This includes a network segment as well as its own Tier-1 gateway and Load Balancer. This first “No SNAT” rule covers traffic between vSphere namespaces through their allocated Load Balancers, and facilitates communication with other Load Balancer services.
  • The second “No SNAT” rule is for traffic between standard Kubernetes namespaces in the Supervisor cluster. This facilitates pod-to-pod communication.
  • The last rule is a SNAT rule which uses an Egress IP address; as mentioned earlier, this is a requirement for traffic exiting the Supervisor Cluster Namespace to access external services. All workloads running in the Supervisor cluster share the same SNAT IP for North-South connectivity.
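
For reference, these NAT rules can also be retrieved through the NSX Policy REST API. A minimal sketch – the NSX Manager hostname and admin credentials are placeholders, and the Tier-1 ID must first be looked up since the Supervisor generates it:

    # Find the ID of the auto-created Tier-1 gateway
    curl -k -u admin:'VMware1!' \
      https://nsx-manager.lab.local/policy/api/v1/infra/tier-1s

    # List the NAT rules configured on that Tier-1
    curl -k -u admin:'VMware1!' \
      https://nsx-manager.lab.local/policy/api/v1/infra/tier-1s/<tier-1-id>/nat/USER/nat-rules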

That completes the NAT overview.

Tier-1 Network Services – Gateway Firewall Rules

Last but not least, we come to the Gateway Firewall rules. To look at these more closely, it is easiest to change context to the Security tab, and then select Gateway Firewall under Policy Management. Here you will observe a policy for the Tier-0 and another for the Tier-1. Both are displayed here. At present, the firewall rules appear to allow all traffic between the namespaces and services in the Supervisor cluster.
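
The same policies can be listed via the Policy API if that is more convenient; again, the hostname and credentials are placeholders:

    # Gateway firewall policies for both the Tier-0 and the Tier-1 live
    # under the 'default' domain in the policy tree
    curl -k -u admin:'VMware1!' \
      https://nsx-manager.lab.local/policy/api/v1/infra/domains/default/gateway-policies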

Let’s do one additional test, and that is to create a new vSphere namespace. Let’s see what new networking objects and services are created by that step.

Create a new vSphere Namespace

I am creating a new vSphere namespace with the default settings. This means that it uses the existing Tier-0 gateway, NAT mode is enabled, and the Load Balancer size is set to small. The Namespace network and Namespace subnet are left at the defaults (10.244.0.0/20). After creating the new vSphere namespace, let’s look at the Network Overview once more.

I’ve highlighted the updated items. We can see that creating a new namespace resulted in the following new network objects and services.

  • 1 x Tier-1 Gateway (connected to Tier-0)
  • 1 x Segment (no ports used until something is deployed in the namespace, e.g. vSphere Pod, TKG cluster)
  • 1 x IP Pool (with IP range taken from Namespace network range)
  • 3 x NAT Rules (for the new T1, same as before but with a new SNAT IP address for this namespace)
  • 1 x Load Balancer (of type server, as seen before)
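
One way to watch the new segment’s port count move off zero is to deploy something into the namespace. A sketch, assuming the (hypothetical) namespace is called ‘demo-ns’ and you have already logged in with the vSphere plugin for kubectl:

    # Switch to the new vSphere Namespace
    kubectl config use-context demo-ns

    # Deploy a single vSphere Pod; once it is running, the segment in
    # NSX Manager should show one port in use
    kubectl run nginx-test --image=nginx
    kubectl get pods -o wide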

I hope this gives you a good idea of the kinds of networks and services that get created when enabling vSphere with Tanzu Workload Management using NSX-T networking, and successfully deploying a Supervisor cluster.
