First Steps with NSX-T Edge – DHCP server

Now that we have an overlay network deployed, it’s time to turn our attention to the NSX-T Edge and get it to do something useful for us. An NSX-T Edge can do many useful things for you (routing, NAT, and so on), but I really want to keep things as simple as possible, so I will deploy my NSX-T Edge to provide DHCP addresses to my VMs. In order to do this, my Edge will first of all need to participate in the same overlay/tunnel network as my hosts. I will then need to create a logical switch that my VMs can connect to, and finally I will need to configure a DHCP server and attach it to my logical switch so that my VM interfaces get IP addresses assigned. Let’s take a look at how I do that.

NSX-T Edge Appliance Networking Setup

Again, I’m not going to cover the deployment steps. You can get these from the official NSX-T documentation here, or pop over to Sam’s blog here for the steps. Kudos also to Kevin Barrass for helping us figure out some of the configuration steps.

Once deployed, the NSX-T Edge will have 4 network adapters. The first of these is used for the management network. The other 3 interfaces (fp-eth0, fp-eth1 and fp-eth2) can then be used for connecting to overlay networks or for routing. We are not going to look at routing just yet; for the moment we will focus on the overlay. Now, for the NSX-T Edge and the ESXi hosts to communicate on the overlay, the NSX-T Edge needs a connection to the same VLAN/network that the ESXi hosts are using for the overlay. This means you will need a VM portgroup on a VSS/VDS with an uplink that can reach this VLAN/network. This is what my NSX-T Edge configuration looked like before I began to add it to my NSX fabric. If you do not already have such a portgroup, you will have to create one and connect it to one of the interfaces on the NSX-T Edge, as shown below.
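
If you need to create that portgroup and would rather do it from the command line than the vSphere client, it can be done from the ESXi shell on a standard vSwitch. Treat this as a sketch: vSwitch0 is an assumption for whichever vSwitch has the right uplink, and the VLAN ID of 190 is purely an example, so substitute whatever VLAN your hosts actually use for the overlay.

# create the portgroup for the Edge overlay connection (vSwitch0 is an assumption)
esxcli network vswitch standard portgroup add --portgroup-name=nsx-t-edge-connect --vswitch-name=vSwitch0
# tag it with the same VLAN the hosts use for the overlay (190 is just an example value)
esxcli network vswitch standard portgroup set --portgroup-name=nsx-t-edge-connect --vlan-id=190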

Do not worry about the router connection for now. My router will eventually be on my management network, as this is how I will be connecting my internal networks to the outside world (which I will try to cover in a later post if I can).

Adding the NSX-T Edge to the NSX Fabric

The easiest way to do this is to run the following two commands, one on the NSX-T Manager and one on the NSX-T Edge. The first step is to get the “thumbprint” from the Manager, and the second is to join the Edge to the fabric. Log in to the Manager as admin:

nsx-manager> get certificate api thumbprint

75e7dfb8414496b984ff57cdf549ec889a4ca8ae9fbc7aaf206dc1679d2a1949

nsx-manager>

On the NSX-T Edge, log in as admin and run this command:

nsx-edge> join management-plane nsx-manager username admin thumbprint 75e7dfb8414496b984ff57cdf549ec889a4ca8ae9fbc7aaf206dc1679d2a1949

Password for API user:********

Node successfully registered as Fabric Node: fc943c22-3e29-11e8-98a4-005056bcba93

nsx-edge>

The NSX-T Edge should now be visible in the NSX-T Manager UI, under Fabric: Nodes, in the Edges view.
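
If you prefer the API to the UI for this sort of check, listing the fabric nodes on the NSX-T Manager should also show the newly registered Edge. The nsx-manager hostname below is a placeholder for your own Manager address, and -k is only there because my lab Manager uses a self-signed certificate:

# list all fabric nodes registered with the Manager; the new Edge should appear here
curl -k -u admin https://nsx-manager/api/v1/fabric/nodes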

Create an Edge Profile

Just like we did with the hosts, the Edge will also need its own uplink profile. This differs from the host profile in two ways. The first is that there is no need to add a VLAN ID, since the traffic is already tagged at the vSwitch layer, so we can set the VLAN to 0. The second is the MTU size, which is set to 1500 rather than 1600. This is what my Edge Profile looks like:
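
As an aside, the same uplink profile can be created through the REST API. Treat the sketch below as exactly that, a sketch: the payload field names are from my recollection of the host-switch-profiles API in NSX-T 2.x, so check the API guide for your build, and the profile name and uplink name are just values I made up for the example.

# create an Edge uplink profile: VLAN 0, MTU 1500, single active uplink (names are examples)
curl -k -u admin -X POST https://nsx-manager/api/v1/host-switch-profiles \
  -H "Content-Type: application/json" \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "nsx-edge-uplink-profile",
        "transport_vlan": 0,
        "mtu": 1500,
        "teaming": {
          "policy": "FAILOVER_ORDER",
          "active_list": [ { "uplink_name": "uplink-1", "uplink_type": "PNIC" } ]
        }
      }'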

Making the Edge a Transport Node

Again, just like we did with the hosts, the Edge will now be configured as a Transport Node, and added to the same Transport Zone as the hosts. Go to Fabric: Nodes, click on Transport Nodes and click +ADD. Below is the General view, where we simply provide the name, the node (which is our new Edge) and move the Transport Zone (same one used by our hosts) from Available to Selected.

Next, click on the N-VDS part to fill in the details about the hostswitch. Here we add the Edge Switch Name, which is the same as the hostswitch name. The “overlay-switch” is, in this example, the name of the hostswitch that was configured for the hosts when making them into Transport Nodes. This ensures that the hosts and the Edge are on the same tunnel/overlay.

The uplink profile is the one we created in the previous step. The IP Pool used for IP assignment, “hosts-teps-pool”, is the same one I used when making the ESXi hosts Transport Nodes.

Finally, select a network adapter on the edge to connect to the overlay network. fp-eth1 below is network adapter 2 on my Edge as seen from vSphere, and this is the one I connected to the portgroup (nsx-t-edge-connect) that can reach the network used by the hosts for the overlay. Here are the details that I used.

When the above is saved, the Edge should now be visible as a transport node, alongside the hosts.
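
You can confirm the same thing over the API by listing the transport nodes; the Edge should be returned alongside the hosts (nsx-manager is again a placeholder for your Manager address):

# list all transport nodes; the hosts and the new Edge should all be returned
curl -k -u admin https://nsx-manager/api/v1/transport-nodes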

Verify Edge to Host overlay connectivity

The Edge should be assigned the next available IP address from the IP pool above. My hosts were assigned 192.168.190.1-4 respectively, so my Edge should have been assigned 192.168.190.5. A way of checking is to SSH onto your Edge as the admin user, and run the following commands. First, get the VRF id of the TUNNEL, and use that to get the list of interfaces. This should verify that the Edge tunnel/overlay connection is indeed the next IP in the IP pool range, in this case 192.168.190.5.

nsx-edge> get logical-routers
Logical Router
UUID                                   VRF    LR-ID  Name   Type     Ports
736a80e3-23f6-5a2d-81d6-bbefb2786666   0      0             TUNNEL   3

nsx-edge> vrf 0
nsx-edge(vrf)> get interfaces
Logical Router
UUID                                   VRF    LR-ID  Name   Type
736a80e3-23f6-5a2d-81d6-bbefb2786666   0      0             TUNNEL
interfaces
    interface   : 9fd3c667-32db-5921-aaad-7a88c80b5e9f
    ifuid       : 262
    mode        : blackhole

    interface   : 005c7fff-29b3-5355-b4a2-89b4a0e9216b
    ifuid       : 309
    name        :
    mode        : lif
    IP/Mask     : 192.168.190.5/24
    MAC         : 00:50:56:bc:58:a3
    LS port     : c9470dfe-cbac-581e-9b64-67dbfab6c056
    urpf-mode   : NONE
    admin       : up
    MTU         : 1600

    interface   : f322c6ca-4298-568b-81c7-a006ba6e6c88
    ifuid       : 261
    mode        : cpu

nsx-edge(vrf)>

The final test is to verify that we can ping the Edge interface from the ESXi hosts, and vice versa. From the Edge session above, you can use a normal ping command, but from the ESXi hosts you need to specify that this is the special “Geneve” tunnel/overlay network stack, which means using the ++netstack=vxlan option, e.g.:

vmkping ++netstack=vxlan 192.168.190.5
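
And for completeness, the corresponding check from the Edge side, still in the VRF context from the session above, is just a plain ping to one of the host TEP addresses (192.168.190.1 in my setup):

nsx-edge(vrf)> ping 192.168.190.1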

Check out the previous post on overlays if you need more guidance on this step. If that succeeds, congratulations: you have just joined your Edge to the same overlay as your hosts.

Create an Edge Cluster

In order to get our Edge to do anything useful, such as providing DHCP, it needs to be part of an Edge Cluster. Even though we only have one Edge, this is still a necessary step. Navigate to Fabric: Nodes, select Edge Cluster, then +ADD, and simply add your Edge to the Edge Cluster. Note that the Transport Node type needs to be changed from Physical Hosts to Virtual Machines for the Edge to become visible. Then move it from Available to Selected, like so:
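
If you wanted to script this step, I believe the equivalent call is a POST to the edge-clusters API, along the lines of the sketch below. I am going from memory on the member schema, so verify it against the API guide; the cluster name is just an example and the transport node UUID is a placeholder for your Edge.

# create an Edge Cluster containing our single Edge transport node (UUID is a placeholder)
curl -k -u admin -X POST https://nsx-manager/api/v1/edge-clusters \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "edge-cluster-01",
        "members": [ { "transport_node_id": "<edge-transport-node-uuid>" } ]
      }'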

Set up DHCP

Now we are ready to set up DHCP. There are five steps to getting DHCP deployed:

  1. Create a DHCP Server Profile
  2. Create a DHCP Server
  3. Add an IP Pool to the DHCP Server
  4. Create a Logical Switch
  5. Attach the DHCP Server to the Logical Switch

Let’s get going.

Create a DHCP Server Profile

In the NSX Manager UI, navigate to DDI: DHCP and select Server Profiles. Click on +ADD and give the profile a name. If you only have a single Edge Cluster, the Edge Cluster field will be automatically populated with the one created previously. Select which Edge Transport Node to use for this profile; again, in my setup there is only one. Finally, save the profile.
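
For anyone automating this, I believe the profile can also be created via the dhcp/server-profiles API, roughly as follows. The field names are from memory, the profile name is just an example, and the Edge Cluster UUID is a placeholder:

# create a DHCP server profile bound to our Edge Cluster (UUID is a placeholder)
curl -k -u admin -X POST https://nsx-manager/api/v1/dhcp/server-profiles \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "dhcp-server-profile",
        "edge_cluster_id": "<edge-cluster-uuid>"
      }'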

Create a DHCP Server

Navigate to DDI: DHCP and select Servers, then click on +ADD to add a DHCP Server. All that is required for the DHCP Server is a name and an IP address. I have given it an IP address of 192.168.191.2 as I plan to use 192.168.191.1 as my gateway later for getting to the outside world. I haven’t populated any other fields. Now save the DHCP Server.
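
The API equivalent would be something like the sketch below. Note that the ipv4_dhcp_server field names are from my memory of the 2.x schema, the /24 prefix is an assumption on my part, and the profile UUID is a placeholder, so double-check before relying on it:

# create a logical DHCP server with IP 192.168.191.2 (the /24 is an assumption)
curl -k -u admin -X POST https://nsx-manager/api/v1/dhcp/servers \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "dhcp-server-01",
        "dhcp_profile_id": "<dhcp-server-profile-uuid>",
        "ipv4_dhcp_server": { "dhcp_server_ip": "192.168.191.2/24" }
      }'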

Add an IP Pool to the DHCP Server

This part was a little confusing. You have to click on the name of the DHCP Server, and on the right-hand side, which shows the DHCP Server Overview details, there is an IP Pools section. Expand this section and click on +ADD to add an IP range for your DHCP server. I have chosen the range 192.168.191.100-192.168.191.200, and I have also set a default gateway of 192.168.191.1. Leave the rest of the settings at the defaults, and save the IP Pool.
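
And again, the rough API equivalent, with the DHCP server UUID as a placeholder and the field names from memory:

# add an IP pool (192.168.191.100-200, gateway 192.168.191.1) to the DHCP server
curl -k -u admin -X POST "https://nsx-manager/api/v1/dhcp/servers/<dhcp-server-uuid>/ip-pools" \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "dhcp-pool-01",
        "gateway_ip": "192.168.191.1",
        "allocation_ranges": [ { "start": "192.168.191.100", "end": "192.168.191.200" } ]
      }'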

Create a Logical Switch

In this step, we are creating a logical switch. This is different to the hostswitch/N-VDS that we created earlier for the Transport Nodes. The logical switch will appear as a portgroup in vSphere, and we will be able to connect our virtual machines’ network interfaces to it. VMs connected to the same logical switch will be able to communicate via the overlay network. Navigate to Switching: Switches and click on +ADD to create a logical switch. You simply need to provide a name (which will also be the portgroup name) and the Transport Zone (the same overlay Transport Zone we have been using all along). There is no VLAN requirement, and Admin Status (Up) and Replication Mode (Hierarchical) can be left at their defaults. Save the Logical Switch.
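
The API version of this step is short. As far as I recall, the hierarchical replication mode shown in the UI maps to MTEP in the API; the switch name below is just an example and the Transport Zone UUID is a placeholder:

# create an overlay logical switch in our Transport Zone (name and UUID are examples/placeholders)
curl -k -u admin -X POST https://nsx-manager/api/v1/logical-switches \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "overlay-ls-01",
        "transport_zone_id": "<overlay-transport-zone-uuid>",
        "admin_state": "UP",
        "replication_mode": "MTEP"
      }'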

Attach the DHCP Server to the Logical Switch

Our final step is to attach the DHCP Server to the Logical Switch. If our DHCP Server is configured correctly and attached to the logical switch, it should be able to provide any VMs attached to this logical switch/portgroup with an IP address. Simply select the Logical Switch (check the box against it in the Switching: Switches view), then select the Actions drop-down and click on Attach to a DHCP Server.

From the pop-up, simply select the DHCP Server that we created earlier and click save.

This step could also have been done from the DHCP server, where its actions list allows it to be attached to a Logical Switch. It doesn’t really matter which method you choose.
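
My understanding is that, under the covers, the attach operation simply creates a logical port on the switch with a DHCP_SERVICE attachment pointing at the DHCP server. If you wanted to do it via the API, I would expect it to look roughly like the sketch below, but do treat the attachment type and field names as assumptions and check the API guide; the UUIDs are placeholders.

# create a logical port on the switch and attach the DHCP server to it (UUIDs are placeholders)
curl -k -u admin -X POST https://nsx-manager/api/v1/logical-ports \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "dhcp-server-port",
        "logical_switch_id": "<logical-switch-uuid>",
        "admin_state": "UP",
        "attachment": { "attachment_type": "DHCP_SERVICE", "id": "<dhcp-server-uuid>" }
      }'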

Verifying Switch and DHCP functionality

The first thing we can check is the DHCP server port on the Logical Switch. Click on the Logical Switch, and in the Overview section on the right hand side, you should see the port listed if you select the Related: Ports view.
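
The same check can be done over the API by listing the ports on the logical switch (the UUID below is a placeholder); the DHCP server port should show up in the results:

# list the ports on the logical switch; the DHCP server port should be among them
curl -k -u admin "https://nsx-manager/api/v1/logical-ports?logical_switch_id=<logical-switch-uuid>"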

Next, we can go to our vSphere environment, and deploy a VM that will use the logical switch/portgroup and see if it gets allocated an IP address. I chose a Windows VM, and simply picked the logical switch/portgroup as follows. Note the name of the portgroup is the same as the logical switch.

Now if I open a console to that VM, I can check the IP address and so on. Sure enough, this VM has indeed been assigned an IP address from the DHCP range configured earlier.

Excellent. Our Edge is doing exactly what we wanted it to do. And I can also ping the DHCP server from this VM:
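
For anyone following along, the in-guest checks on the Windows VM are just the usual ones, something like:

rem confirm the adapter picked up an address from the 192.168.191.100-200 range
ipconfig /all
rem verify the DHCP server itself is reachable over the logical switch
ping 192.168.191.2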

We have not yet set up any gateway, so there is no point in pinging that just yet. The next thing to verify is VM-to-VM communication, so I configured another VM to use the logical switch/portgroup, and it also got its IP address from our DHCP server.

And as a final check, can we now ping our other VM from this VM on the tunnel?

Success! These VMs are also visible on the Logical Switch back in NSX Manager. Navigate to Switching: Switches, click on the overlay logical switch, then use the Related drop-down to select Ports. Here you will see the VMs that are connected to the logical switch/portgroup, as well as the DHCP server port.

Great. We now have an NSX-T Edge participating as a Transport Node in our overlay/tunnel Transport Zone. We then configured the NSX-T Edge to provide DHCP functionality and attached it to a Logical Switch, which in turn allowed this service to be consumed by our VMs. Very nice.

Stay tuned while I figure out my next step, which is basically to get this overlay network/tunnel routed to the outside world.
