Let’s start with a diagram showing all three sites in a VSAN stretched cluster.
- The VM network. In all likelihood, you will want this to be a stretched L2 network (switched), to avoid any re-IP’ing of the virtual machines if/when they are started on a different data site. You only need to consider this network in the context of the data sites, as you will not run protected VMs from the VSAN stretched cluster on the witness site; the VMs will only run on the data sites.
- The vMotion network. This can be either a stretched L2 network (switched) or an L3 network (routed). You only need to consider this network in the context of the data sites, as you will not migrate protected VMs from the VSAN stretched cluster to the witness site; the VMs will only migrate between the data sites. In case you were not aware, vMotion has been fully supported over L3 since vSphere 6.0. My good pal Duncan wrote about it here.
- The management network. The vCenter server will need to be able to manage the hosts on the data sites as well as the witness host. This may be a combination of L2 and/or L3. So long as the vCenter server can reach the ESXi hosts in a supported way, and add them to its inventory to manage them, there are no other considerations.
- The VSAN network. OK, this is where things become interesting. With the release of VSAN 6.0, VMware supports both L2 and L3 for VSAN network connectivity. The same is true for VSAN 6.1 stretched clusters, but with some caveats. VMware supports L2 between the data sites for the VSAN network, with L3 to the witness site. VMware also supports L3 everywhere, between the data sites and between the data sites and the witness. What we do not support is L2 everywhere. The reasons will become clear in the next section.
L2 VSAN network between data sites and L3 to the witness
Considering that you have perhaps already implemented an L2 stretched network between the data sites for the virtual machines, implementing a stretched L2 network for the VSAN network would be VMware’s recommendation. However, we would only recommend stretching the L2 between the data sites, and not stretching it to include the witness site. The reason for this is given later on. One additional point to make is that most customers will typically have their VSAN network on a completely different network to the management network, and thus the default gateway. This means that customers will need to add static routes to allow the VSAN network on the data sites to reach the VSAN network on the witness site, and vice versa, as there will be no path to the remote VSAN network via the default gateway on the host. Such a config might look something like this very simplistic diagram, with VSAN traffic switched between the data sites but routed to the witness site.
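As a rough sketch of what adding those static routes looks like on the ESXi command line, the commands below use esxcli. All of the addresses are hypothetical placeholders for illustration: a data-site VSAN network of 172.16.10.0/24, a witness-site VSAN network of 172.16.30.0/24, and routers at .1 on each. Substitute your own networks and gateways.

```shell
# On each data-site ESXi host: add a static route so VSAN traffic
# destined for the witness-site VSAN network (172.16.30.0/24) goes
# via the local VSAN network's router (172.16.10.1), not the
# management default gateway.
esxcli network ip route ipv4 add --gateway 172.16.10.1 --network 172.16.30.0/24

# On the witness host: add the reverse route back to the data-site
# VSAN network.
esxcli network ip route ipv4 add --gateway 172.16.30.1 --network 172.16.10.0/24

# Verify the routing table on each host.
esxcli network ip route ipv4 list
```

Note that these routes are per-host, so they need to be applied on every host in the cluster, including the witness.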
While recommending the L2 stretched VSAN network between the data sites and L3 to the witness site, VMware also supports L3 between all of the sites. However, since VSAN 6.1 continues to use multicast between the data sites, you will also need to ensure that the multicast traffic is successfully routed over L3 for the cluster to form correctly. This adds more complexity to the mix, which is why VMware recommends L2 between the data sites, and L3 from the data sites to the witness site (note that traffic between the data sites and the witness is unicast, not multicast). Again, static routes will be necessary to implement such a configuration. In our lab testing, we also found that IGMP v3 is much more stable than IGMP v2 in these configurations, so if your switch supports both, certainly use IGMP v3 for the VSAN connections.
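Whichever topology you choose, it is worth sanity-checking connectivity from the hosts before expecting the cluster to form. A quick sketch, with hypothetical values (vmk1 as the VSAN VMkernel port and 172.16.30.10 as the witness host’s VSAN IP — substitute your own):

```shell
# From a data-site host: confirm unicast reachability to the witness
# host's VSAN VMkernel IP over the VSAN vmknic. This exercises the
# static routes, since data-to-witness VSAN traffic is unicast.
vmkping -I vmk1 172.16.30.10

# Multicast between the data sites cannot be checked with a simple
# ping, so confirm the end result instead: all hosts should appear
# as members of the same VSAN cluster.
esxcli vsan cluster get
```

If the vmkping works but hosts on the two data sites refuse to join the same cluster, multicast between the data sites is the first thing to investigate.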
Why L2 everywhere is not a good idea
I alluded to this earlier; why not simply stretch the L2 everywhere? The concern here is that VSAN supports the witness site with a much higher latency, and a much lower bandwidth, when compared to the data sites. Now consider if the inter-data site link is lost. It is now possible for the VSAN traffic on data site A to reach data site B via the witness site. And if the witness site has been implemented with a lower bandwidth/higher latency, this will degrade the performance of the whole cluster. There are of course other considerations with such a topology, as loops are now possible in the network with multiple paths to the same destination, e.g. Site A -> Site B, and Site A -> Witness Site -> Site B. This will probably necessitate the implementation of Spanning Tree Protocol (STP), or similar, to avoid these network loops, and have one of the paths disabled while the other path is active. For those reasons, we recommend that customers avoid a fully stretched L2 network across all sites for the VSAN network in stretched clusters.
More information
I’ll do a number of additional posts in the coming weeks on stretched VSAN. VMware will shortly be releasing a VSAN Stretched Cluster Guide, which will describe the network topologies and configuration in more detail. If considering a VSAN stretched cluster deployment, you should most definitely review this document before starting the deployment.