
NSX-T and vSphere with Tanzu revisited (Part 2 of 3)

In part 1 of 3, we covered how to add vCenter server as the NSX Compute Manager and how to configure the ESXi hosts as host transport nodes. In this part 2 of the series, the creation of an NSX Edge cluster is described. Once again, the end goal of this post is to have an NSX-T configuration that can be leveraged by vSphere with Tanzu. When this part is complete, the overlay network should extend to include the Edge nodes for east-west traffic. The Edge nodes will also be configured with uplinks to allow for north-south traffic, connecting them to the upstream physical router. We will do this setup in two parts: first the Edge nodes will be built with the overlay network only, and then the VLAN uplinks for upstream connectivity will be added as a second step.

Note that the methodology chosen here is simply one way of configuring the Edge. Since the Edge has 4 interfaces in total, we will configure 1 x management interface, 1 x overlay interface and 2 x VLAN interfaces. If your environment allows it, the same set of interfaces could of course be used for both overlay and VLAN networks. Similarly, I am tagging my VLANs at the distributed portgroup layer, which means that I will not be tagging within NSX. You may choose an alternate approach and trunk your portgroups, then add the VLAN tags within NSX. Like I said, there are many ways to approach this. The choice is yours. With that in mind, let's begin.

1. Create an IP Pool for Edge Tunnel Endpoints (TEPs)

This step is almost identical to the one implemented in part 1, step 6 for the host TEPs. This IP Pool will provide the IP addresses for the Edge TEPs. Navigate to Networking > IP Address Pools, under IP Management. Add a subnet of IP addresses that can be used for the Edge TEPs, such as those shown below:
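
For anyone who prefers to automate this step, the same pool can be created via the NSX-T Policy REST API. The sketch below is a minimal example only: the manager address (nsx-mgr.example.com), credentials, pool name and subnet values are all placeholders that you should adjust for your own environment.

# Create the IP pool object (names and addresses are illustrative)
curl -k -u admin:'VMware1!' -X PATCH \
  https://nsx-mgr.example.com/policy/api/v1/infra/ip-pools/Edge-TEP-Pool \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "Edge-TEP-Pool"}'

# Add a static subnet with an allocation range for the Edge TEPs
curl -k -u admin:'VMware1!' -X PATCH \
  https://nsx-mgr.example.com/policy/api/v1/infra/ip-pools/Edge-TEP-Pool/ip-subnets/edge-tep-subnet \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "IpAddressPoolStaticSubnet",
        "cidr": "192.168.102.0/24",
        "gateway_ip": "192.168.102.254",
        "allocation_ranges": [
          {"start": "192.168.102.1", "end": "192.168.102.20"}
        ]
      }'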

In VCF, there is a requirement for the Host and Edge overlay networks to be on separate VLANs which are routable to one another. My understanding is that this restriction originated in NSX-T itself, but as of NSX-T version 3.1, hosts and Edges can be on the same VLAN. In this case, however, I still have them on separate VLANs. If you are a VCF user, you are still required to place the host and Edge overlay networks on separate VLANs. As per the VCF 4.4 Network Design Guide, you should continue to “Use a dedicated VLAN for the edge overlay network that is segmented from the host overlay VLAN”. It is also called out in the NSX Edge Cluster prerequisites for VCF 4.4.

2. Create Edge Uplink Overlay Profile

Again, we saw how to create profiles for the hosts in part 1, step 4. The step is repeated here, but now the purpose is to create a profile for the uplink of the NSX Edge appliance so that it can participate in the overlay / tunnel with the ESXi hosts. Note that on this occasion, there is only a single teaming uplink. This is because the Edge appliance has a maximum of 4 interfaces, with one allocated for management, one for the overlay uplink (east-west traffic) and the remaining 2 for VLAN uplinks (north-south traffic) which connect to the upstream router. As mentioned at the outset, this is simply one way of configuring the Edge. Navigate to System > Fabric > Profile in the NSX Manager UI. Select Uplink Profile, then Add Profile. Note from the screenshot below that I have set the Transport VLAN to 0. This is because I have tagged the VLANs on the distributed portgroups on the vSphere distributed switch. If you do not want to use this approach, perhaps because your distributed portgroups are trunked, you should set the Transport VLAN ID appropriately at this point. Lastly, if no MTU is provided, it defaults to 1700.
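
If you want to script this step instead, an uplink profile can also be created through the NSX Manager (MP) API. Below is a minimal sketch matching the single-uplink, VLAN 0 configuration described above; the manager address, credentials and profile name are placeholders, and with no mtu field supplied the profile defaults to 1700 as noted.

curl -k -u admin:'VMware1!' -X POST \
  https://nsx-mgr.example.com/api/v1/host-switch-profiles \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "edge-overlay-uplink-profile",
        "transport_vlan": 0,
        "teaming": {
          "policy": "FAILOVER_ORDER",
          "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"}
          ]
        }
      }'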

3. Create Edge Transport Node

In part 1, step 8, we looked at how to create host transport nodes. Now we are going to do something similar by creating an Edge transport node. Navigate to System > Fabric > Nodes and select Add Edge Node. This launches the wizard to begin populating information about the Edge node. The first window below prompts for a name and FQDN for the node. An optional description can also be added. You are also asked to choose the form factor for the Edge. The recommendation for production environments is to select a Large configuration, but since this is my own lab, I have chosen the default setting of Medium.

The next window prompts for various passwords for the NSX Edge node logins. I have also enabled SSH, since the ability to login to the Edge appliance to run some commands can be very useful when troubleshooting.

In the ‘Configure Deployment’ section, the vCenter server that was set up as the Compute Manager in part 1, step 1 is selected. You must also select the vSphere cluster and a datastore.

In the ‘Configure Node Settings’ section, the management interface for the Edge node is configured. Remember that the Edge node can have a maximum of 4 uplinks. One of those uplinks is for the management network, and that is what is being configured in this step. In this case, I have chosen a static management IP assignment, and provided a CIDR format IP address and a default gateway. I have also chosen the distributed portgroup to attach the interface to. Search Domain Names, DNS Servers and NTP Servers should also be populated.

Now we come to the part where we can configure the overlay and uplink networks. In the ‘Configure NSX’ section, I am going to add this Edge to the same management overlay network as the ESXi hosts. In part 1, step 7 this was done by creating a Transport Node Profile for the ESXi hosts. Here we can do it directly whilst creating the Edge node. Similar attributes need to be added as before, such as the uplink profile created in step 2 above and the IP Pool created in step 1 above. Finally, the Teaming Policy Uplink Mapping requires the selection of a distributed portgroup to attach the Edge node uplink to. This portgroup must be able to communicate with the ESXi hosts via the uplinks which they used for the management overlay in part 1, step 7. In my example, I labeled the distributed portgroup VL3001-NSX-Edge-TEP-DPG, and selected it as shown below.

Click on Finish to complete the creation of the Edge node. Note that in the ‘Configure NSX’ section, I could have extended the configuration to include the VLAN uplinks to provide the north-south traffic for this environment. However, to keep things a little simpler, at least to my mind, I am going to do this separately from the initial creation of the Edge node.
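
Since SSH was enabled on the Edge earlier, a quick sanity check can be run from the Edge node CLI once it has deployed. The commands below are standard NSX-T Edge CLI commands (the edge01> prompt and annotations are illustrative, and command output will differ per environment):

edge01> get managers          # confirm the Edge has registered with NSX Manager
edge01> get interfaces        # list the management and fp-ethX datapath interfaces
edge01> get logical-routers   # list the router/VRF contexts present on this Edge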

4. Create Edge VLAN Uplink Profile

Let’s now proceed with the remaining Edge networking configuration. The next step is to create another Uplink Profile, this time for the Edge VLANs. We have already seen how to do this for the overlay, so as before, navigate to System > Fabric > Profile in the NSX Manager UI. The profile is created with a single uplink and, as before, the Transport VLAN is set to 0 (since tagging is done at the distributed portgroup level). With no MTU set, the default is 1700.
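
If scripting, this is the same API call as the sketch shown in step 2, just with a different profile name, e.g.:

curl -k -u admin:'VMware1!' -X POST \
  https://nsx-mgr.example.com/api/v1/host-switch-profiles \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "edge-vlan-uplink-profile",
        "transport_vlan": 0,
        "teaming": {
          "policy": "FAILOVER_ORDER",
          "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}]
        }
      }'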

5. Create Edge VLAN Transport Zones

The next step is to create two VLAN Transport Zones, one for each VLAN uplink on the Edge nodes. Navigate to System > Fabric > Transport Zones in the NSX Manager UI. These are simple to create, as we’ve already seen. This time, however, the Traffic Type is VLAN rather than Overlay, as shown below.
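
As with the other objects, the transport zones can be created via the API. A minimal sketch using the MP API is shown below; the display name is a placeholder, and note that the transport_type here is VLAN, where the overlay transport zone from part 1 used OVERLAY. Repeat the call with a second display name for the other uplink. (Older NSX-T releases also required a host_switch_name field; from NSX-T 3.0 it is optional.)

curl -k -u admin:'VMware1!' -X POST \
  https://nsx-mgr.example.com/api/v1/transport-zones \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "edge-vlan-tz-uplink1",
        "transport_type": "VLAN"
      }'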

6. Modify Edge node to include VLAN Uplinks

Now we can use the VLAN Uplink Profile and VLAN Transport Zones to modify the Edge node, which currently only has a management interface and an overlay interface. The remaining two interfaces will be added as VLAN uplinks to provide north-south traffic. Navigate once more to System > Fabric > Nodes, select the existing Edge node and click on Edit. In this example, rather than modifying the existing switch, I have added 2 new switches, one for each uplink. In each switch, I added a Transport Zone from step 5 and the uplink profile from step 4. I also chose another distributed portgroup which is on the correct VLAN for communicating upstream. Again, this configuration could be done in other ways, such as defining multiple Transport Zones on the same switch.
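
For reference, the edit made in the UI corresponds to extra switch entries in the Edge transport node's host_switch_spec, which can be retrieved and updated via GET/PUT on /api/v1/transport-nodes/<node-id>. The trimmed JSON fragment below is purely illustrative: the switch name is made up, the profile and transport zone IDs are placeholders, and on an Edge VM the datapath interfaces appear as fp-eth0, fp-eth1 and so on.

"host_switch_spec": {
  "resource_type": "StandardHostSwitchSpec",
  "host_switches": [
    ...existing overlay switch entry unchanged...,
    {
      "host_switch_name": "edge-vlan-switch-1",
      "host_switch_profile_ids": [
        {"key": "UplinkHostSwitchProfile", "value": "<edge-vlan-uplink-profile-id>"}
      ],
      "transport_zone_endpoints": [
        {"transport_zone_id": "<edge-vlan-tz-uplink1-id>"}
      ],
      "pnics": [
        {"device_name": "fp-eth2", "uplink_name": "uplink-1"}
      ]
    }
  ]
}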

You should now repeat steps 3 and 6 to create the second Edge node.

7. Add Edge Nodes to Edge Cluster

Now navigate to System > Fabric > Profile in the NSX Manager UI. Select Edge Cluster Profiles, click on Add Profile and create a new profile for the Edge cluster. You can use the default settings.

Navigate once more to System > Fabric > Nodes, select Edge Cluster and click on Add Edge Cluster. Select the Edge Cluster Profile created just now, and add both Edge transport nodes to the cluster by selecting them in the Available pane and clicking on the arrow to move them to the Selected pane.
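
The equivalent API call, for anyone scripting the deployment, is a POST to the edge-clusters endpoint. A minimal sketch is below; the transport node and profile UUIDs are placeholders that you would retrieve from /api/v1/transport-nodes and /api/v1/cluster-profiles respectively.

curl -k -u admin:'VMware1!' -X POST \
  https://nsx-mgr.example.com/api/v1/edge-clusters \
  -H 'Content-Type: application/json' \
  -d '{
        "display_name": "edge-cluster-01",
        "cluster_profile_bindings": [
          {"resource_type": "EdgeHighAvailabilityProfile", "profile_id": "<edge-cluster-profile-uuid>"}
        ],
        "members": [
          {"transport_node_id": "<edge-node-1-uuid>"},
          {"transport_node_id": "<edge-node-2-uuid>"}
        ]
      }'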

That completes the setup of the NSX Edge cluster.

8. Validating the Edge Cluster

The first thing you should notice is that the Edge nodes now have a tunnel. This should be visible in the Edge Transport Nodes view.
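
The same can be confirmed from the Edge nodes themselves. On the Edge CLI, the TEP ports and the tunnel health can be inspected with the following commands (annotations are mine, and exact output varies between NSX-T versions):

edge01> get tunnel-ports      # show the Edge TEP port(s) and their IP addresses
edge01> get bfd-sessions      # BFD sessions to the remote host and Edge TEPs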

It should also be possible to examine the Edge nodes via the vSphere client. Each Edge node should now have 4 interfaces based on the configuration we provided here. There should be one management interface, one overlay interface and 2 VLAN interfaces. Note that this is simply one way of implementing the Edge configuration, and other configurations are possible, as pointed out at the beginning of the post.

Finally, we should be able to do some ping tests to verify connectivity between the Edge nodes and the ESXi hosts on the overlay network. This is similar to what we did in part 1, step 9, where we made sure the ESXi hosts were able to ping each other. Repeat this test, but this time verify that the ESXi hosts and the Edge nodes can reach one another, e.g.

[root@w4-hs4-i1501:~] vmkping ++netstack=vxlan 192.168.102.1
PING 192.168.102.1 (192.168.102.1): 56 data bytes
64 bytes from 192.168.102.1: icmp_seq=0 ttl=64 time=0.343 ms
64 bytes from 192.168.102.1: icmp_seq=1 ttl=64 time=0.177 ms
64 bytes from 192.168.102.1: icmp_seq=2 ttl=64 time=0.186 ms

--- 192.168.102.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.177/0.235/0.343 ms

[root@w4-hs4-i1501:~]
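
One further check worth running is an MTU validation with a large, non-fragmentable packet, since Geneve encapsulation requires at least a 1600 byte MTU end to end. Assuming the uplink profile default of 1700 from step 2, a ping like the one below should succeed; -d sets the don't-fragment bit, and the 1572 byte payload plus 28 bytes of IP/ICMP headers produces a 1600 byte packet.

[root@w4-hs4-i1501:~] vmkping ++netstack=vxlan -d -s 1572 192.168.102.1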

At this point everything looks good. We have successfully created the Edge cluster, and it is able to communicate with the hosts via the overlay network. We are now ready to set up our Tier-0 Gateway, which will be explained in part 3.
