VSAN 6.0 Part 8 – Fault Domains

One of the really nice new features of VSAN 6.0 is fault domains. Previously, there was very little control over where VSAN placed virtual machine components. To protect against something like a rack failure, you may have had to use a very high NumberOfFailuresToTolerate value, resulting in multiple copies of the VM data dispersed around the cluster. With VSAN 6.0, this is no longer a concern, as hosts participating in the VSAN cluster can be placed in different fault domains. This means that component placement takes place across fault domains and not just across hosts. Let’s look at this in action.

In this example, I have a 4 node cluster. I am going to create 3 fault domains. The first fault domain contains one host, the second fault domain also contains one host, and the third fault domain has two hosts. It looks something like this:

Of course, this isn’t a very realistic setup, as you would typically have many more hosts per rack, but this is what I had at my disposal to test this feature. However, the concept remains the same. The idea now is to have VSAN deploy virtual machine components across the fault domains in such a way that a single rack failure will not make the VM inaccessible; in other words, to maintain a full copy of the virtual machine data even when a rack fails.
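
To make that placement rule concrete, here is a minimal Python sketch, purely for illustration and not based on any VSAN API, of the layout used in this post. The host names (host1 to host4) and fault domain names (FD1 to FD3) are labels of my own choosing; the rule the sketch checks is the one that matters, namely that the two mirror copies of an object never share a fault domain:

    # Purely illustrative model (not VSAN code): hosts grouped into fault domains.
    fault_domains = {
        "FD1": ["host1"],            # rack 1
        "FD2": ["host2"],            # rack 2
        "FD3": ["host3", "host4"],   # rack 3
    }

    def fd_of(host, fault_domains):
        """Return the fault domain (rack) a host belongs to."""
        return next(fd for fd, members in fault_domains.items() if host in members)

    def copies_in_distinct_domains(copy_a_hosts, copy_b_hosts, fault_domains):
        """True if the two mirror copies share no fault domain."""
        a = {fd_of(h, fault_domains) for h in copy_a_hosts}
        b = {fd_of(h, fault_domains) for h in copy_b_hosts}
        return a.isdisjoint(b)

    # Host-level placement alone could mirror a VM between host3 and host4,
    # both in the same rack, so one rack failure would take out both copies:
    print(copies_in_distinct_domains(["host3"], ["host4"], fault_domains))  # False
    # Fault-domain-aware placement keeps the two copies in different racks:
    print(copies_in_distinct_domains(["host1"], ["host3"], fault_domains))  # True

Note that a single copy may still spread its stripe components across more than one fault domain, as the layout later in this post shows; the guarantee is only that no fault domain ends up holding components from both copies.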

The first step is to set up the fault domains. This is done in the vSphere web client under Settings > Virtual SAN > Fault Domains:

Using the green + symbol, fault domains can be created and hosts added to them. Based on the design outlined above, I ended up with a fault domain configuration looking like this:

Now, in my configuration each host has 2 magnetic disks (HDDs), so in order to use as much of the hardware as possible, I decided to create a VM Storage Policy with StripeWidth (NumberOfDiskStripesPerObject) = 3 and FTT (NumberOfFailuresToTolerate) = 1. I then deployed a virtual machine with this policy and examined the layout once it was deployed. First, I made sure that the VM was compliant with the policy, in other words that VSAN was able to meet the StripeWidth and FTT requirements, which it was (VM > Manage > Policies):
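
As an aside, a quick sanity check on the numbers this policy should produce; a minimal sketch of the arithmetic, leaving witness components out (as it turns out, this particular layout has none, see the quorum note further down):

    # Expected data-component count for this policy (witnesses not counted).
    ftt = 1            # NumberOfFailuresToTolerate
    stripe_width = 3   # NumberOfDiskStripesPerObject

    mirror_copies = ftt + 1                          # RAID-1 replicas
    data_components = mirror_copies * stripe_width   # stripe components in total
    print(mirror_copies, data_components)            # 2 copies, 6 data components

With only two HDDs per host, a 3-way stripe cannot fit on a single host, so each copy has to spread across at least two hosts, which lines up with the placement shown next.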

I then checked the placement of the components using the VM > Monitor > Policies view:

As we can see, one copy of the data (RAID 0, 3-way stripe) resides on hosts 1 and 2, and the other copy of the data (RAID 0, 3-way stripe) resides on hosts 3 and 4. Both are mirrored/replicated in a RAID 1 configuration. Now, these are the questions we need to ask ourselves (the short sketch after the list works through the same checks):

  •  If rack 1 fails (containing host 1), do I still have a full copy of the data? The answer is Yes.
  •  If rack 2 fails (containing host 2), do I still have a full copy of the data? The answer is Yes.
  •  If rack 3 fails (containing hosts 3 & 4), do I still have a full copy of the data? The answer is still Yes.
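
Here is the short sketch referred to above; it walks through those same three questions programmatically. The exact split of stripe components between the two hosts in each copy is an assumption for illustration; all the layout view shows is that copy A lives entirely on hosts 1 and 2 (FD1/FD2) and copy B entirely on hosts 3 and 4 (FD3):

    # Illustrative failure walk-through (not VSAN code) for the layout above.
    fault_domains = {"FD1": ["host1"], "FD2": ["host2"], "FD3": ["host3", "host4"]}
    copies = {
        "copyA": ["host1", "host1", "host2"],   # 3 stripe components, assumed split
        "copyB": ["host3", "host4", "host4"],   # 3 stripe components, assumed split
    }

    def full_copy_survives(failed_fd):
        """True if at least one copy keeps every one of its stripe components."""
        dead_hosts = set(fault_domains[failed_fd])
        return any(all(host not in dead_hosts for host in hosts)
                   for hosts in copies.values())

    for fd in fault_domains:
        print(f"{fd} fails -> full copy survives: {full_copy_survives(fd)}")
    # FD1 fails -> full copy survives: True
    # FD2 fails -> full copy survives: True
    # FD3 fails -> full copy survives: True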

What about quorum if rack 3 fails? There are no witnesses present in this configuration, so how is quorum achieved? Well, this is another new enhancement in VSAN 6.0 whereby, under certain conditions, components can have votes rather than relying on witnesses. I discussed the new quorum behaviour in this earlier post.

Fault domains are a nice new addition to Virtual SAN 6.0. Previously with FTT, we stated that you needed ‘2n + 1’ hosts to tolerate ‘n’ failures. With fault domains, you now need ‘2n + 1’ fault domains to tolerate ‘n’ failures.
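
Expressed as a trivial helper, the sizing rule itself is unchanged; only the unit moves from hosts to fault domains:

    def fault_domains_required(n):
        """Minimum number of fault domains needed to tolerate n failures."""
        return 2 * n + 1

    print(fault_domains_required(1))  # 3 fault domains for FTT=1
    print(fault_domains_required(2))  # 5 fault domains for FTT=2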

7 Replies to “VSAN 6.0 Part 8 – Fault Domains”

  1. Hi Cormac,

    great article!

    If we configure only two fault domains with two hosts each, the default storage policy is not respected since it requires 3 fault domains, right? What’s the best practice in this case?

    Thanks,
    Nicola

    1. You’ll need a minimum of 3 x FDs as you said, so FD 1 will contain 1 host, FD 2 will contain 1 host, and FD 3 will contain 2 hosts (or some configuration like this).

      1. So I’ll have FD 1 with host 1 and FD 2 with host 2, both in the same rack, and FD 3 with hosts 3 and 4 in the second rack (possibly even in a different room).

        1. That’s not really going to give you anything, Nicola. If rack #1 fails, you’ll lose two FDs, and thus access to all your VMs. You would need 3 racks to set this up.

          1. Thanks Cormac, now it’s clearer. In such a situation, what would be the ‘best’ solution? I’m trying to set up a lab environment for some tests.

          2. If you create 3 x FDs like I mentioned, with host 1 => FD1, host 2 => FD2, and hosts 3 & 4 in FD3, then with FTT=1 & SW=1 you should never see two components belonging to the same VM deployed on hosts 3 and 4.

            The components (assuming 3 per VM) should be deployed:

            Component 1 => FD1 => host 1
            Component 2 => FD2 => host 2
            Component 3 => FD3 => either host 3 or 4.

            This way, when an FD fails, you still have a majority of components and the VM remains accessible.
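
            Sketched out very roughly (an illustration of the majority rule only, not how VSAN implements voting), with one component and one vote per FD:

                # One component per fault domain, one vote each (assumed).
                components = {"FD1": 1, "FD2": 1, "FD3": 1}

                def accessible_after(failed_fd):
                    # A strict majority of votes must survive for the object to stay accessible.
                    surviving = sum(v for fd, v in components.items() if fd != failed_fd)
                    return surviving > sum(components.values()) / 2

                print([accessible_after(fd) for fd in components])   # [True, True, True]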
