Some useful NSX Troubleshooting Tips

I’ve been working on some Disaster Recovery (DR) scenarios recently with my good pal Paudie. Last month we looked at how we might be able to protect vCenter Operations Manager using a vApp construct and IP customization. After VMworld, we turned our attention to NSX, and how we might be able to implement a DR solution for NSX. This is still a work in progress, but we did learn some very useful NSX troubleshooting commands that I thought would be worth sharing with you.

1. ping ++netstack=vxlan

If you are going to create a logical network that spans multiple clusters, and perhaps different VLANs, it’s probably a good idea to verify that you can successfully reach all of the VTEPs (VXLAN Tunnel End Points). The NSX Controllers tell the VTEPs everything they need to know to connect their physical ports to virtual networks. Use the ping ++netstack=vxlan command to do this. In my environment, vmk5 was the VMkernel NIC used for VXLAN encapsulation, so I needed to be able to ping hosts in another VLAN, and also the controllers, to ensure everything was working. The command esxcfg-vmknic -l will list the VMkernel ports on an ESXi host, along with their IP addresses. Note that you will also need to specify vxlan as the netstack, or else the ping won’t work (see below):

~ # ping -I vmk5 172.24.150.100
Unknown interface 'vmk5': Invalid argument
~ # ping ++netstack=vxlan -I vmk5 172.24.150.100
PING 172.24.150.100 (172.24.150.100): 56 data bytes
64 bytes from 172.24.150.100: icmp_seq=0 ttl=60 time=125.298 ms
64 bytes from 172.24.150.100: icmp_seq=1 ttl=60 time=125.208 ms
64 bytes from 172.24.150.100: icmp_seq=2 ttl=60 time=125.059 ms

--- 172.24.150.100 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 125.059/125.188/125.298 ms
~ #
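
To double-check which VMkernel NIC is bound to the vxlan netstack in the first place, you can filter the esxcfg-vmknic listing. This assumes ESXi 5.5 or later, where the output includes a NetStack column:

~ # esxcfg-vmknic -l | grep vxlan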

2. Get controller status

Once you can successfully ping all of the VTEPs and NSX Controllers, check that each VTEP has successfully established a connection to the controller. Using the esxcli network vswitch dvs vmware vxlan network list --vds-name $vds-name command on the ESXi host, you can see the controller connection and confirm that it is up.

~ # esxcli network vswitch dvs vmware vxlan network list --vds-name mia
VXLAN ID Multicast IP              Control Plane                       \
-------- ------------------------  ----------------------------------  \ 
5001     N/A (headend replication) Enabled (multicast proxy,ARP proxy) \
Controller Connection Port Count MAC Entry Count ARP Entry Count  MTEP Count
--------------------- ---------- --------------- ---------------- ----------
172.24.150.106 (up)   2          0               0                0
~ #

3. Port 1234

Another useful check is to examine the network connections and make sure that the netcpa-worker on the ESXi host and the controller are communicating over TCP port 1234. The following command will help you to do this; you want to see that the connection is ESTABLISHED. If the state is SYN_SENT, it implies that there are additional communication issues to resolve.

~ # esxcli network ip connection list| grep tcp | grep 1234
tcp   0    0  172.25.133.17:41219    172.24.150.108:1234  ESTABLISHED  ..  
tcp   0    0  172.25.133.17:22981    172.24.150.106:1234  ESTABLISHED  ..                 
~ #

4. Packet Traces with pktcap-uw

ESXi ships with a packet capture utility called pktcap-uw. This allows you to capture packet traces from switch ports and uplinks. When capturing VXLAN traffic, ensure that you specify the correct segment ID via --vxlan <segment id>. This will allow you to trace packets leaving one ESXi host (perhaps the host where the NSX Edge is providing a service like DHCP) and arriving at another ESXi host (perhaps the host where a VM is trying to pick up an IP address via DHCP). The VXLAN ID can be found in the UI or from the esxcli command above. Later versions of Wireshark can also display the VXLAN transport information.
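
By way of illustration, a capture on an uplink for VXLAN ID 5001 (taken from the esxcli output above) might look like the following; vmnic2 and the output path are placeholder values, so substitute your own uplink and file name. The resulting .pcap can then be opened in Wireshark:

~ # pktcap-uw --uplink vmnic2 --vxlan 5001 -o /tmp/vxlan5001.pcap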

5. Force a Re-register of VMs

This next tip is very useful if you have a situation where all of your controllers get deleted and need to be redeployed.

This one had us scratching our heads quite a bit. We found it during DR testing: all of the controllers were on site A, then we failed over and started NSX Manager up on site B, so we needed to deploy all of our controllers again. This involved deleting the original controllers on site A.

Because of this, all of our existing VXLAN information was lost. Now if we tried to pick up a DHCP address from the NSX Edge, it wouldn’t work. We needed to get the VXLANs to re-register with the new controllers. A quick way to do this is to toggle the logical switch from unicast or hybrid mode (the modes where controllers are needed) to multicast mode (the mode where controllers aren’t needed). This re-registers all of the MAC addresses of your VMs on the controllers. You only need to do this momentarily; flip the logical switch back to unicast/hybrid immediately afterwards. You will need to add a multicast range to your Segment ID pool first though, or else you will get the error “Unable to allocate an available resource”. Now DHCP (and other services) should start to work again.
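
If you want to verify that the re-registration worked, you can log in to one of the new controllers and check the MAC and VTEP tables for the logical switch. For example, assuming VXLAN ID 5001 from the esxcli output earlier (controller CLI syntax can vary slightly between NSX versions):

# show control-cluster logical-switches mac-table 5001
# show control-cluster logical-switches vtep-table 5001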

 Hopefully you will find some of these troubleshooting tips useful. Kudos to Emiliano Turra for assistance with much of this.

12 Replies to “Some useful NSX Troubleshooting Tips”

  1. When you deploy a new Controller Cluster (which is what you’re doing), it doesn’t know about the VXLANs and DLR instances you had. By flipping the control plane from Unicast/Hybrid (which requires Controllers) to Multicast (which doesn’t) and back, you force the VXLANs to register with the new Controller Cluster.

    You can do it per Logical Switch, or you can do it for your Transport Zone (or Zones, if you have more than one) and tell it to also apply the change to the Logical Switches it covers.

    This, however, still leaves out your DLRs. To get them re-registered, you have to “Re-deploy” all your DLRs, one by one.

    1. Good to know Dmitri – thanks. We were testing some DR scenarios and deliberately kept things simple, just to see if we could get it to work. Thanks for the added clarification.

    2. Dmitri, if failing over sites as Cormac is describing, you’d also need to re-deploy edge appliances.

      However, if you only lost the controllers for some reason but your appliances were still deployed, you can also resync via the API without having to redeploy all of the DLRs.

      1. Do the multicast to hybrid/unicast trick as described above to register VMs.
      2. Find your cluster IDs via the vCenter MOB (Managed Object Browser). They should look something like domain-c83, for example.
      3. Using the API, configure a POST to https://<nsx-manager>/api/2.0/nwfabric/configure?action=synchronize
      4. Make sure you have basic authentication set, along with a Content-Type of application/xml.
      5. In the POST body, place each cluster ID between the resourceId tags as below:

      <nwFabricFeatureConfig>
        <featureId>com.vmware.vshield.vsm.messagingInfra</featureId>
        <resourceConfig>
          <resourceId>domain-c83</resourceId>
        </resourceConfig>
      </nwFabricFeatureConfig>

      6. Perform the POST. The cluster should resynchronize, and this should sync the controllers with host information for VXLANs and DLR configs.
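
      For illustration only, the call could be made with curl along these lines, where resync-body.xml contains the body from step 5 (the NSX Manager address, credentials and file name are placeholders):

      curl -k -u admin -X POST -H "Content-Type: application/xml" \
           -d @resync-body.xml \
           "https://<nsx-manager>/api/2.0/nwfabric/configure?action=synchronize"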

      I had this issue in my lab due to power outages (my basement is nowhere near a Tier 4 data center); a few times my controllers lost all of their information and I had to redeploy.

  2. Great article, thanks.

    I would just add the following commands for troubleshooting L2 Bridging:

    Lists MAC table, learnt on both VXLAN and VLAN sides
    net-vdr -b --mac default+edge-1

    Dump statistics
    net-vdr -b --stats default+edge-1

    From Controller
    nvp-controller # show control-cluster logical-routers instance all
    nvp-controller # show control-cluster logical-routers bridges $LR-ID all
    nvp-controller # show control-cluster logical-routers bridge-mac $LR-ID $BR-ID

    Replace edge-1 with the name of the Distributed Router where you’ve created your Bridge Instance, and $LR-ID/$BR-ID with your Logical Router ID and Bridge ID respectively.

    Sébastien.
