Supported network topologies for VSAN stretched cluster

As part of the Virtual SAN 6.1 announcements at VMworld 2015, possibly the most eagerly anticipated was the support for a VSAN stretched cluster configuration. Now VSAN can protect your virtual machines across data centers, not just across racks (which was achievable with fault domains, introduced in VSAN 6.0). I’ve been hearing requests from customers to support this since the initial VSAN beta, so it is definitely a welcome addition to the supported configurations. The obvious next question is: how do I set it up? Well, first of all, you will need to make sure that you have a validated network topology between all the sites: the two data sites, which will host copies of your data, and of course the witness site, which will host the witness components of your virtual machine objects. If you are unsure about what a witness does, have a read of this earlier blog post here.

Let’s start with a diagram showing all three sites in a VSAN stretched cluster.

[Diagram: the three sites in a VSAN stretched cluster]

Now there are a few supported topologies that could be implemented, but let’s start with a discussion of the different networks that you might be using.

  1. The VM network. In all likelihood, you will want this network to be a stretched L2 network (switched), to avoid any re-IP’ing of the virtual machines if/when they are started on a different data site. You only need to consider this network in the context of the data sites, as you will not run protected VMs from the VSAN stretched cluster on the witness site. The VMs will only run on the data sites.
  2. The vMotion network. This can be either a stretched L2 network (switched) or an L3 network (routed). You only need to consider this network in the context of the data sites, as you will not migrate protected VMs from the VSAN stretched cluster to the witness site. The VMs will only migrate between the data sites. In case you were not aware, vMotion is fully supported over L3 since vSphere 6.0. My good pal Duncan wrote about it here.
  3. The management network. So, the vCenter server will need to be able to manage the hosts on the data sites as well as the witness host. This may be a combination of L2 and/or L3. So long as the vCenter server can reach the ESXi hosts in a supported way, and add them to its inventory to manage them, there should be no other considerations.
  4. The VSAN network. OK, this is where things become interesting. With the release of VSAN 6.0, VMware supports both L2 and L3 for VSAN network connectivity. The same is true for VSAN 6.1 stretched clusters, but with some caveats. VMware supports L2 between the data sites for the VSAN network, with L3 to the witness site. VMware also supports L3 everywhere, between the data sites and between the data sites and the witness. What we do not support is L2 everywhere. The reasons will become clear in the next section. (A quick way to sanity-check VSAN network reachability between the sites is sketched just after this list.)
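
Before looking at the topologies themselves, note that you can sanity-check VSAN network reachability between the sites from the ESXi hosts with vmkping. A minimal sketch follows; the vmk2 interface name and the IP addresses are hypothetical placeholders for your own VSAN VMkernel ports and subnets.

    # From an ESXi host on data site A; vmk2 is assumed to be the VSAN VMkernel port
    vmkping -I vmk2 172.16.10.22   # a VSAN interface on data site B
    vmkping -I vmk2 172.16.30.10   # the VSAN interface on the witness host (routed over L3)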

L2 VSAN network between data sites and L3 to the witness

Considering that you have perhaps already implemented an L2 stretched network between the data sites for the virtual machines, implementing a stretched L2 network for the VSAN network would be VMware’s recommendation. However, we would only recommend stretching L2 between the data sites, and not stretching it to include the witness site. The reason for this is given later on. One additional point to make is that most customers will typically have their VSAN network on a completely different network from the management network, and thus from the default gateway. This means that customers will need to add static routes to allow the VSAN network on the data sites to reach the VSAN network on the witness site, and vice versa, as there will be no path to the remote VSAN network via the default gateway on the host. Such a config might look something like this very simplistic diagram, with VSAN traffic switched between the data sites but routed to the witness site.
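
To give a feel for what this involves, here is a minimal sketch of adding such static routes with esxcli. The subnets (172.16.10.0/24 for the stretched data-site VSAN network, 172.16.30.0/24 for the witness site) and the gateway addresses are hypothetical placeholders; substitute your own VSAN networks and routers.

    # On each ESXi host at the data sites: add a route to the witness site's VSAN network
    esxcli network ip route ipv4 add --network 172.16.30.0/24 --gateway 172.16.10.253

    # On the witness host: add a route back to the data sites' VSAN network
    esxcli network ip route ipv4 add --network 172.16.10.0/24 --gateway 172.16.30.253

    # Verify the routing table on each host
    esxcli network ip route ipv4 list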

[Diagram: VSAN stretched cluster with L2 between the data sites and L3 to the witness]

L3 VSAN network everywhere

While recommending the L2 stretched VSAN network between the data sites and L3 to the witness site, VMware also supports L3 between all of the sites. However, since VSAN 6.1 continues to use multicast between the data sites, you will also need to ensure that the multicast traffic is successfully routed over L3 for the cluster to form correctly. This adds more complexity to the mix, which is why VMware recommends L2 between the data sites, and L3 from the data sites to the witness site (note that traffic between the data sites and the witness is unicast, not multicast). Again, static routes will be necessary to implement such a configuration. In our lab testing, we also found that IGMP v3 is much more stable than IGMP v2 in these configurations, so if your switch supports both, certainly use IGMP v3 for the VSAN connections.
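
If you do go the L3-everywhere route, you can check which multicast groups VSAN is using, and therefore which groups your routers must forward, from any host. A minimal sketch, assuming the default multicast addresses have not been changed:

    # List the VSAN network configuration, including the multicast group addresses
    # (by default, master group 224.1.2.3 and agent group 224.2.3.4)
    esxcli vsan network list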

Why L2 everywhere is not a good idea

I alluded to this earlier; why not simply stretch L2 everywhere? The concern here is that VSAN supports the witness site with a much higher latency, and a much lower bandwidth, when compared to the data sites. Now consider what happens if the inter-data-site link is lost. It is now possible for the VSAN traffic on data site A to reach data site B via the witness site. And if the witness site has been implemented with lower bandwidth/higher latency, this will degrade the performance of the whole cluster. There are of course other considerations with such a topology, as there are now loops possible in the network with multiple paths to the same destination, e.g. Site A -> Site B, and Site A -> Witness Site -> Site B. This will probably necessitate the implementation of Spanning Tree Protocol (STP), or similar, to avoid these network loops, and have one of the paths disabled while the other path is active. For those reasons, we recommend that customers avoid a fully stretched L2 network across all sites for the VSAN network in stretched clusters.

More information

I’ll do a number of additional posts in the coming weeks on stretched VSAN. VMware will shortly be releasing a VSAN Stretched Cluster Guide, which will describe the network topologies and configuration in more detail. If considering a VSAN stretched cluster deployment, you should most definitely review this document before starting the deployment.

5 Replies to “Supported network topologies for VSAN stretched cluster”

  1. Hi Cormac,

    Does a VSAN Stretched Cluster support NSX to stretch L2 networks over L3?

    It would be great if there was a low cost easy to deploy version of NSX for this exact purpose that could be bundled with VSAN.

    Many thanks as always
    Mark

    1. These conversations are ongoing at the moment, Mark.
      [Edit] I should have said that you can definitely use NSX with stretched VSAN. The conversations are around the bundling. If you are planning to use NSX with stretched VSAN, let me know. I’d be interested in learning more about what you are planning.

  2. Hi Cormac,

    In a ROBO configuration, can you attach 2 nodes directly back to back using 10GbE NICs for vSAN traffic, thereby removing the requirement for a 10GbE switch? A virtual router could then be used on the cluster to make sure that vSAN traffic can reach the witness appliance back in the data center over L3. Given how small the traffic footprint is between the cluster and the witness, this should work just fine, but would this be considered a supported configuration? 🙂
