vSphere 5.1 Storage Enhancements – Part 10: 5 Node MSCS Support

This is not really a storage feature per se, but I am including it in this series of vSphere 5.1 storage enhancements because most of the work to support this 5-node Microsoft cluster framework was done in the storage layer.

Although most of the framework has been in place since vSphere 4.0 (support for SCSI-3, the LSI SAS controller in the virtual hardware, and support for PGRs), a number of additional improvements were required before we could scale out from the 2 nodes we supported in the past to 5 nodes.

I also need to call out that this is for failover clusters only. In a failover cluster, if one of the cluster nodes fails, another node begins to provide service (a process known as failover). It should be noted that users will experience a temporary disruption in service when this occurs.

4-node cluster testing was done in addition to 5-node testing, because the quorum models used are different for these configurations:

  • 4-node clusters use a Node and Disk Majority model.
  • 5-node clusters use a Node Majority model.

With a 3- or 5-node cluster (since each node has a vote), establishing quorum isn't an issue. Even if the cluster were to partition into 3 nodes and 2 nodes (i.e. split-brain), the side with the most votes takes ownership. This is called a majority node set quorum, and the quorum data is stored locally on the system disk of each cluster node.

In a 4-node cluster, we could end up in a situation where there are 2 nodes/votes on either side of the partition. In this case, we use a majority node and disk set quorum, where the quorum data is stored locally on the system disk of each cluster node but is also stored on a shared disk accessible by all nodes. This shared disk (also known as the witness disk) has the deciding vote in a split-brain scenario.
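To make the vote arithmetic concrete, here is a minimal sketch of how the two quorum models decide whether a partition may continue. This is purely an illustration of the counting logic described above, not VMware or Microsoft code, and the function name and parameters are my own:

```python
def has_quorum(surviving_nodes, total_nodes,
               witness_disk_owned=False, node_and_disk_majority=False):
    """Return True if the surviving partition retains quorum.

    Node Majority (odd clusters, e.g. 5 nodes): each node has one vote,
    and the partition needs a strict majority of all votes.

    Node and Disk Majority (even clusters, e.g. 4 nodes): the shared
    witness disk contributes one extra vote, breaking 2-vs-2 ties.
    """
    total_votes = total_nodes + (1 if node_and_disk_majority else 0)
    votes = surviving_nodes
    if node_and_disk_majority and witness_disk_owned:
        votes += 1  # this partition owns the witness disk's deciding vote
    return votes > total_votes // 2

# 5-node cluster partitions 3 vs 2: the 3-node side keeps quorum.
assert has_quorum(3, 5) and not has_quorum(2, 5)

# 4-node cluster partitions 2 vs 2: the side owning the witness disk wins.
assert has_quorum(2, 4, witness_disk_owned=True, node_and_disk_majority=True)
assert not has_quorum(2, 4, node_and_disk_majority=True)
```

The witness disk only matters in the even-node case; in the odd-node case a tie is impossible, which is why the 5-node configuration can use the simpler Node Majority model.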

The enhancements made in vSphere 5.1 allow up to 5 participating nodes. We tested both 4-node and 5-node configurations, as the quorum model differs depending on whether the number of nodes in the cluster is odd or even. A majority node set cluster can handle up to 2 node failures out of 5.
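The "2 failures out of 5" figure falls out of the majority requirement: a strict majority of votes must survive, so the tolerable failure count for a node majority cluster is whatever is left over. A one-liner illustrating this (my own helper, not part of any product):

```python
def max_node_failures(total_nodes):
    # Node Majority: a strict majority (total_nodes // 2 + 1) of votes
    # must survive, so the cluster tolerates the remaining nodes failing.
    return total_nodes - (total_nodes // 2 + 1)

assert max_node_failures(5) == 2  # the 5-node figure quoted in the text
assert max_node_failures(3) == 1
```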

Get notification of these blog postings and more VMware Storage information by following me on Twitter: @CormacJHogan