NSX-T and vSphere with Tanzu revisited (part 3 of 3)

The steps to deploy NSX-T Manager, create a Compute Manager and configure NSX on the ESXi hosts were described in part 1 of this series of posts. The steps to create an NSX-T Edge cluster were outlined in part 2. In this part 3 post, we will look at the final step in preparing an NSX-T environment for vSphere with Tanzu, and that is the creation and configuration of a tier-0 gateway. Networks that are created for Kubernetes workloads in vSphere with Tanzu will connect to this tier-0 gateway and subsequently allow external connectivity to the TKG clusters, e.g. developers connecting to the Kubernetes API server via kubectl, or end-users accessing K8s-based applications. Once the initial tier-0 gateway is created, additional configuration steps such as setting up BGP and Route Re-distribution are explored.

1. Create External Segments

The first step is to create an external segment. Navigate to Networking > Segments in the NSX Manager UI. Under NSX, click on Add Segment. The information to provide at this point is the segment name, the transport zone (which will be the Edge uplink transport zone created in part 2, step 5) and the VLAN (which is set to 0 since, as explained previously, VLAN tagging in this environment is done at the distributed portgroup level). You do not need to add the tier-0 as a connected gateway, or the gateway IP address – this will be done when we create the interfaces on the tier-0 later, in step 3.

Repeat this step for any additional segments that you wish to create.
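If you prefer to script this step rather than use the UI, the same segment can be created via the NSX Policy REST API. The sketch below is a minimal example and makes some assumptions: the manager FQDN (nsx-manager.example.com), the segment name (vlan-segment-edge-uplink1) and the transport zone id are placeholders for values from my lab, so substitute your own.

# Create a VLAN-backed segment on the Edge uplink transport zone (VLAN 0, untagged)
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx-manager.example.com/policy/api/v1/infra/segments/vlan-segment-edge-uplink1 \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "vlan-segment-edge-uplink1",
        "vlan_ids": ["0"],
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<edge-uplink-tz-id>"
      }'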

2. Create Tier-0 Gateway

Navigate to Networking > Tier-0 Gateways in the NSX Manager UI. Click on Add Gateway, and select Tier-0 from the drop-down list. Initially, there are only a handful of items to configure. The first is the name of the tier-0 gateway. Then decide on the HA Mode, which can be Active-Active or Active-Standby. The only other options are to add the Edge cluster and to select the Fail Over mode if the HA Mode is set to Active-Standby. Then click Save so that other options can be configured.
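For reference, the equivalent of this step using the NSX Policy API is sketched below. The gateway name (T0) is a placeholder, and I am assuming an Active-Standby deployment here; the Edge cluster is attached through the gateway's locale-services object, so the edge cluster path shown needs to be replaced with the id of the Edge cluster created in part 2.

# Create the tier-0 gateway in Active-Standby mode
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx-manager.example.com/policy/api/v1/infra/tier-0s/T0 \
  -H "Content-Type: application/json" \
  -d '{ "display_name": "T0", "ha_mode": "ACTIVE_STANDBY", "failover_mode": "NON_PREEMPTIVE" }'

# Associate the Edge cluster with the tier-0 via its locale-services
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx-manager.example.com/policy/api/v1/infra/tier-0s/T0/locale-services/default \
  -H "Content-Type: application/json" \
  -d '{ "edge_cluster_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>" }'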

3. Add External Interfaces to Tier 0 Gateway

After saving the initial configuration of the tier-0, additional configuration steps can now be carried out. The next step is to add external interfaces to the tier-0. Click on the Set link next to External and Service Interfaces, then click on Add Interface. We are creating External interfaces. The required settings are (i) a name for the interface, (ii) an IP Address in CIDR format, (iii) the segment to use, i.e. one of those created in step 1, and finally (iv) the Edge node. The remaining entries can be left at the default. Click Save once again, but do not close the editor as we still have more configuration settings to make. Here is the first interface, and it is using the first Edge node.

Here is the second interface, using the second Edge node.

Both interfaces have been successfully added.

Note that the IP addresses that I am using for these interfaces allow me to connect to my upstream, external router. In the next step, I will configure BGP (Border Gateway Protocol) which allows the upstream router to learn about the virtual networks that NSX-T is creating, and vice-versa. This will be especially important for Load Balancer IP addresses used for both Kubernetes applications and API servers, as it will mean that developers/SREs as well as end-users can communicate with them.

Repeat this step for any additional interfaces that you wish to create.
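Again, for anyone automating the build, an external interface can also be added through the Policy API. This is only a sketch: the interface id, IP address, segment path and Edge node path are illustrative values from my lab and should be replaced with your own; repeat the call with the second Edge node path to create the second interface.

# Add an external interface on the first Edge node
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx-manager.example.com/policy/api/v1/infra/tier-0s/T0/locale-services/default/interfaces/uplink1 \
  -H "Content-Type: application/json" \
  -d '{
        "type": "EXTERNAL",
        "subnets": [ { "ip_addresses": ["192.168.105.1"], "prefix_len": 24 } ],
        "segment_path": "/infra/segments/vlan-segment-edge-uplink1",
        "edge_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>/edge-nodes/<edge-node-1-id>"
      }'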

4. Configure HA VIP

The HA Virtual IP Address now needs to be created. This requires an IP address in CIDR format, along with the two interfaces that were created in step 3. Select both interfaces. Click Add, then Apply the HA VIP configuration, and continue to edit the tier-0.
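The HA VIP can also be defined on the tier-0's locale-services via the Policy API. A minimal sketch is shown below; the VIP address and the two interface paths are placeholders, and the ha_vip_configs structure is to the best of my knowledge correct, but do verify the field names against the API documentation for your NSX-T version.

# Configure an HA VIP across the two external interfaces
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx-manager.example.com/policy/api/v1/infra/tier-0s/T0/locale-services/default \
  -H "Content-Type: application/json" \
  -d '{
        "ha_vip_configs": [ {
          "enabled": true,
          "vip_subnets": [ { "ip_addresses": ["<ha-vip-address>"], "prefix_len": 24 } ],
          "external_interface_paths": [
            "/infra/tier-0s/T0/locale-services/default/interfaces/uplink1",
            "/infra/tier-0s/T0/locale-services/default/interfaces/uplink2"
          ]
        } ]
      }'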

5. Configure BGP on the Tier 0 Gateway

There are two steps to configuring BGP. The first step is to configure the local BGP attributes, and then configure the attributes of the remote BGP neighbor, i.e. the upstream router. For the local (T0) BGP, the only setting needed is the Local AS, in this example 64512.
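The local BGP settings map to the bgp object under the tier-0's locale-services in the Policy API. A minimal sketch, assuming the same placeholder gateway name as before:

# Enable BGP on the tier-0 and set the local AS number
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx-manager.example.com/policy/api/v1/infra/tier-0s/T0/locale-services/default/bgp \
  -H "Content-Type: application/json" \
  -d '{ "enabled": true, "local_as_num": "64512" }'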

For the BGP Neighbor, click on the BGP Neighbors Set link, then click on Add BGP Neighbor and add the appropriate information. This includes (i) the IP Address of the remote upstream physical router, as well as (ii) the remote AS number and (iii) a route filter. You may also need to set a password to connect to the upstream router. The password can be configured in this window.

A Route Filter is required when configuring the BGP Neighbor. Simply click on the Route Filter Set link, then Add Route Filter to add it. Leave the IP Address Family set to IPv4 and Enabled. No other settings are required. Click Add then Apply to set the route filter, and then Save to save the BGP Neighbor configuration.
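The neighbor and its route filter can be created in a single Policy API call, sketched below. The neighbor address and remote AS number are the values from my lab (they also show up in the CLI output in step 7), the neighbor id is a placeholder, and the optional BGP password is shown as a comment; as always, check the field names against the API documentation for your NSX-T version.

# Add the upstream router as a BGP neighbor, with an IPv4 route filter
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx-manager.example.com/policy/api/v1/infra/tier-0s/T0/locale-services/default/bgp/neighbors/upstream-router \
  -H "Content-Type: application/json" \
  -d '{
        "neighbor_address": "192.168.105.253",
        "remote_as_num": "65113",
        "route_filtering": [ { "address_family": "IPV4", "enabled": true } ]
      }'
# If the upstream router requires a BGP password, add "password": "<secret>" to the JSON body above.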

6. Configure Route Re-distribution on the Tier 0 Gateway

There is one final configuration option – Route Re-distribution. These are the traffic sources that we want the BGP Neighbor to learn about from the Tier 0, and vice-versa. Click on the Set link for Route Re-distribution, then Add Route Re-Distribution. Provide (i) a name (I set it to default), (ii) leave the destination protocol set to BGP and then (iii) click on the Route Re-distribution Set link to select the traffic sources to share.

In this setup, these are the sources that I have selected to re-distribute. Apply the selection, then Add the Route Re-distribution configuration. Apply, then Save the tier-0 configuration, and you can now close the tier-0 gateway editor.
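Route re-distribution is also part of the locale-services object if you want to script it. The sketch below is an example only: the redistribution rule name and the list of route_redistribution_types are assumptions on my part and are environment-specific, so adjust the list to match the traffic sources you actually selected above.

# Configure BGP route re-distribution on the tier-0
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx-manager.example.com/policy/api/v1/infra/tier-0s/T0/locale-services/default \
  -H "Content-Type: application/json" \
  -d '{
        "route_redistribution_config": {
          "bgp_enabled": true,
          "redistribution_rules": [ {
            "name": "default",
            "route_redistribution_types": [
              "TIER0_CONNECTED", "TIER0_NAT",
              "TIER1_CONNECTED", "TIER1_NAT",
              "TIER1_LB_VIP", "TIER1_LB_SNAT"
            ]
          } ]
        }
      }'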

And that now completes the setup of the tier-0 gateway.

7. Validating the configuration

We can use the NSX Edge command line interface to check various settings. First of all, now that the tier-0 gateway has been created and configured to connect to physical infrastructure, we should see the Service Router and the Distributed Router when we display the logical routers. The Distributed Router (DR) takes care of routing the east-west (E-W) traffic, whilst the Service Router (SR) takes care of routing the north-south (N-S) traffic, and provides connectivity to physical infrastructure such as upstream routers. The DR is instantiated across all transport nodes (Host and Edge nodes). The SR is instantiated on the Edge nodes. As we have seen, the Edge nodes are able to send/receive overlay traffic to/from the ESXi hosts. Thus, traffic from a VM hosted on an ESXi host goes through the Edge node on an overlay network to connect to a device in physical infrastructure.

nsxedge1-cormac> get logical-routers
Tue Apr 26 2022 UTC 07:43:23.542
Logical Router
UUID                                   VRF   LR-ID  Name    Type                       Ports   Neighbors
736a80e3-23f6-5a2d-81d6-bbefb2786666   0     0              TUNNEL                     3       2/5000
9b92534f-c329-415d-95a3-241ac7501684   22    2050   SR-T0   SERVICE_ROUTER_TIER0       5       1/50000
c61e7ecf-be14-4f10-bec1-f22785920b60   24    14     DR-T0   DISTRIBUTED_ROUTER_TIER0   6       3/50000

The NSX Edge CLI allows us to query the BGP information and status. By selecting the Service Router VRF from the list of logical routers, we can query the BGP neighbor as well as the routes provided by BGP. In this setup, the default route of 0.0.0.0/0 is provided through the upstream router, so all traffic is routed there.

nsxedge1-cormac> vrf 22

nsxedge1-cormac(tier0_sr[22])> get bgp neighbor 192.168.105.253
BGP neighbor is 192.168.105.253, remote AS 65113, local AS 64512, external link
  BGP version 4, remote router ID 192.168.255.0, local router ID 192.168.105.1
  BGP state = Established, up for 1d23h12m
  Last read 00:00:50, Last write 00:00:48
  Hold time is 180, keepalive interval is 60 seconds
  Configured hold time is 180, keepalive interval is 60 seconds
  Neighbor capabilities:
.
.--snip--
.

nsxedge1-cormac(tier0_sr[22])> get route bgp

Flags: t0c - Tier0-Connected, t0s - Tier0-Static, b - BGP, o - OSPF
t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected,
t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT,
t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec, isr: Inter-SR,
> - selected route, * - FIB route

Total number of routes: 1

b  > * 0.0.0.0/0 [20/0] via 192.168.105.253, uplink-435, 21:56:30
Tue Apr 26 2022 UTC 07:43:42.269
nsxedge1-cormac(tier0_sr[22])>

This all looks good, and we now have an NSX-T network infrastructure that we can use for vSphere with Tanzu. For completeness, here is the link to part 1 of 3 and here is the link to part 2 of 3.
