New vSAN Stretched Cluster Topology now supported

After publishing the vSAN Networking Guide earlier this year, Paudie O’Riordain and I received numerous questions regarding support for having different stretched clusters host each other’s witness appliances. These queries arose because we discussed a 2-node (ROBO) topology which allowed this sort of configuration (i.e. the cross-hosting of witnesses) via special request, but in the networking guide we explicitly stated that it was not supported with vSAN stretched clusters. However, after some further testing by our engineering teams, we can now relax this restriction if there are 4 independent sites hosting the different stretched clusters. In this case, we can have the witness for stretched cluster ‘A’ deployed on stretched cluster ‘B’ and vice versa. Let me explain how this is possible in more detail.

Cross-site witness hosting is not supported on 2 sites hosting 2 stretched cluster implementations

Let’s start with a description of why we cannot have cross-site witness hosting when there are only 2 sites hosting the 2 stretched cluster implementations. Let’s assume the configuration looks something like the following, where SCA (Stretched Cluster A) hosts WB (Witness for Stretched Cluster B) and SCB (Stretched Cluster B) hosts WA (Witness for Stretched Cluster A). Initially, let’s assume that both witnesses reside on data site 1. The configuration may look something like this, assuming the orange lines represent cluster A and the green lines represent cluster B:

We can see that each stretched cluster has a dependency on a site. Now let’s see what happens when we begin to introduce site failures. Let’s begin by taking down data site 2.

So this is actually OK. Since both witnesses were on data site 1, each cluster still has a copy of the data as well as its witness, so our VMs remain available.
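We can sanity-check this with a minimal sketch of the quorum arithmetic. This is just a toy model I've put together for illustration (not an actual vSAN API): each object has 3 components (two data copies plus one witness) and stays accessible with a majority, and a witness appliance is simply treated as up when its hosting site is up.

```python
def has_quorum(components_up):
    """A vSAN object stays accessible with a majority of its
    3 components (two data copies + one witness)."""
    return sum(components_up) >= 2

# Both witnesses run as VMs on data site 1 (WA hosted by cluster B,
# WB hosted by cluster A). Data site 2 fails.
site1_up, site2_up = True, False

# Each witness appliance is a VM on a site-1 host, so both stay up.
wa_up = site1_up
wb_up = site1_up

# Each cluster's objects: data on sites 1 and 2, plus its witness.
cluster_a_ok = has_quorum([site1_up, site2_up, wa_up])  # 2 of 3
cluster_b_ok = has_quorum([site1_up, site2_up, wb_up])  # 2 of 3
# Both clusters keep quorum, so the VMs remain available.
```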

Now let’s take a look at data site 1 failing:

Now we are completely screwed, since a failure on data site 1 impacts both a copy of the data and the witnesses. So now our VMs are no longer available.

I guess the next test would be to host one witness at each site and see what happens when a site fails. Fair enough – let’s begin with a deployment that looks something like this:

And let’s introduce another site failure (it doesn’t matter which side; the behaviour will be the same in both cases):

In this case, if data site 1 fails, all of the VMs on stretched cluster A fail as they can no longer reach quorum – they’ve lost a data copy and they’ve lost their witness WA. This also means that the witness appliance for stretched cluster B (WB), which currently resides on site 2, cannot reach quorum either; it has lost a copy of its data and its witness, both of which were on site 1, so it too is inaccessible. Therefore all of the VMs on stretched cluster B which rely on witness WB can no longer reach quorum – they’ve lost a data copy due to site 1 failing, and their witness WB is now inaccessible too. We now have a complete meltdown, which is why we cannot support such a topology.
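The cascade can be traced with a minimal quorum sketch (a toy model for illustration, not an actual vSAN API): an object needs a majority of its 3 components, and the key twist is that WB is itself a VM whose backing object depends on a site-1 data copy and on WA.

```python
def has_quorum(components_up):
    """A vSAN object stays accessible with a majority of its
    3 components (two data copies + one witness)."""
    return sum(components_up) >= 2

# One witness per site: WA runs as a VM at site 1 (hosted by cluster B),
# WB runs as a VM at site 2 (hosted by cluster A). Data site 1 fails.
site1_up, site2_up = False, True

# WA is a plain VM on a site-1 host, so it goes down with the site.
wa_up = site1_up

# WB is a VM on stretched cluster A: its backing object has data copies
# on sites 1 and 2 and uses WA as its witness. Its host (site 2) is up,
# but the object has lost a data copy AND its witness, so WB is
# inaccessible.
wb_up = site2_up and has_quorum([site1_up, site2_up, wa_up])

# Production VMs: data on sites 1 and 2, plus the relevant witness.
cluster_a_ok = has_quorum([site1_up, site2_up, wa_up])  # 1 of 3
cluster_b_ok = has_quorum([site1_up, site2_up, wb_up])  # 1 of 3
# Both clusters lose quorum: the complete meltdown described above.
```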

Cross-site witness hosting is supported on 4 sites hosting 2 stretched cluster implementations

So how do things change if this is implemented across 4 sites instead of 2? Let’s have a look at a sample deployment. Now we have 4 unique data sites, where stretched cluster A is implemented across 2 of the sites, and stretched cluster B is implemented across the other 2 sites. There are no dependencies between any of the sites, other than the fact that each witness appliance is hosted on the other stretched cluster: WA on SCB, and WB on SCA.

As before, let’s now introduce a site failure:

This failure of data site 1 has had no impact on data site 3 or data site 4, so SCA continues to have both copies of the data available. In fact, the witness for SCA, which was impacted during the failure of data site 1, will be restarted by vSphere HA on data site 2 (along with the production VMs).

Data site 1 failing impacted one of the data copies for SCB. However, SCB continues to have a full copy of the data on data site 2, and the witness WB was not impacted as it resides on a separate data site 3. So while VMs that were on data site 1 will need to be restarted on data site 2, there is no complete meltdown as seen previously with the 2 x site, 2 x stretched cluster, witness cross-hosting scenario. This is why we are now in a position to support this new 4-site topology for stretched clusters and cross-witness hosting.
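Running the same style of toy quorum sketch over the 4-site layout shows why there is no cascade (again, an illustrative model only, not a vSAN API; site placement mirrors the diagrams, with cluster B on sites 1 and 2 and cluster A on sites 3 and 4):

```python
def has_quorum(components_up):
    """A vSAN object stays accessible with a majority of its
    3 components (two data copies + one witness)."""
    return sum(components_up) >= 2

# Four independent sites: cluster B spans sites 1 and 2, cluster A
# spans sites 3 and 4. WA runs as a VM on cluster B (at site 1),
# WB runs as a VM on cluster A (at site 3). Data site 1 fails.
site_up = {1: False, 2: True, 3: True, 4: True}

# WB's host site is untouched by the failure.
wb_up = site_up[3]
# WA goes down with site 1, but vSphere HA restarts it on site 2.
wa_up_during_restart = site_up[1]   # briefly unavailable
wa_up_after_ha = site_up[2]         # available again after restart

# Cluster A objects: data copies on sites 3 and 4, witness WA.
a_ok_during = has_quorum([site_up[3], site_up[4], wa_up_during_restart])
a_ok_after = has_quorum([site_up[3], site_up[4], wa_up_after_ha])
# Cluster B objects: data copies on sites 1 and 2, witness WB.
b_ok = has_quorum([site_up[1], site_up[2], wb_up])
# Every object keeps a 2-of-3 majority throughout: no meltdown.
```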

This new topology will be added to the networking guide as soon as possible.

2 comments
    • There is nothing specific here for 6.6, Mike. This topology simply needed to be tested to verify it would work as expected, which it does.

      But, considering the new functionality that is in 6.6 for stretched clusters, such as the ability to tolerate local failures, I’d strongly recommend 6.6 if at all possible.
