In a previous post about multiple failures in a vSAN stretched cluster, we showed that if a failure causes the data components to fall out of sync, the most recent copy of the data needs to recover before the object becomes accessible again. This is true even if a majority of components is available (e.g. the old data copy and the witness). This ensures that we do not recover the “STALE” copy of the data, which might contain out-of-date information. To briefly revisit the previous post, the accessibility of the object when there are multiple failures…
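To make that rule concrete, here is a minimal sketch of the accessibility check described above. This is not vSAN code; the component names, vote counts and write sequence numbers are illustrative assumptions. The point it shows is that a majority of votes is necessary but not sufficient: at least one live data copy must also hold the most recent writes.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    is_data: bool   # True for a data replica, False for the witness
    votes: int
    seq: int        # highest write sequence this component knows about
    alive: bool

def object_accessible(components):
    """Quorum alone is not enough: a live, up-to-date data copy is also required."""
    total_votes = sum(c.votes for c in components)
    live = [c for c in components if c.alive]
    has_quorum = sum(c.votes for c in live) * 2 > total_votes
    latest_seq = max((c.seq for c in live), default=0)   # witness metadata reveals newer writes exist
    has_fresh_data = any(c.is_data and c.seq == latest_seq for c in live)
    return has_quorum and has_fresh_data

# Old data copy + witness form a majority, but the only live data copy is STALE,
# so the object stays inaccessible until B, the most recent copy, recovers.
A = Component("A", is_data=True,  votes=1, seq=5, alive=True)    # out-of-date copy
B = Component("B", is_data=True,  votes=1, seq=9, alive=False)   # latest copy, currently failed
W = Component("W", is_data=False, votes=1, seq=9, alive=True)    # witness knows seq 9 was committed
print(object_accessible([A, B, W]))   # False: majority present, but only a stale data copy
B.alive = True
print(object_accessible([A, B, W]))   # True: latest copy is back
```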
Some time back I wrote an article describing what happens when an object deployed on a vSAN datastore has a policy of Number of Failures to Tolerate set to 1 (FTT=1) and multiple failures are introduced. For simplicity, let’s label the three components that make up our FTT=1 object as A, B and W. A and B are the data components and W is the witness component. Let’s now assume that we lose access to component A. Components B and W remain available, so the object (e.g. a VMDK) is still accessible. The state of these two components (B…
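To illustrate that first failure concretely, here is a small standalone sketch of the FTT=1 layout, again using illustrative names and a made-up write sequence rather than vSAN internals. It shows that with A absent, B and W still hold a majority of votes, so the VMDK keeps accepting I/O, and B moves ahead to writes that A has never seen.

```python
from dataclasses import dataclass

@dataclass
class Replica:
    name: str
    votes: int = 1
    alive: bool = True
    seq: int = 0          # last write this component has acknowledged

@dataclass
class VsanObject:
    """FTT=1 layout: two data replicas plus one witness (illustrative model only)."""
    data: list
    witness: Replica
    committed_seq: int = 0

    def quorum(self):
        comps = self.data + [self.witness]
        total = sum(c.votes for c in comps)
        live = sum(c.votes for c in comps if c.alive)
        return live * 2 > total

    def write(self):
        if not self.quorum():
            raise IOError("object inaccessible: no quorum")
        self.committed_seq += 1
        for c in self.data + [self.witness]:
            if c.alive:
                c.seq = self.committed_seq   # absent components fall behind

A, B, W = Replica("A"), Replica("B"), Replica("W")
obj = VsanObject(data=[A, B], witness=W)

obj.write()            # A, B and W all acknowledge seq 1
A.alive = False        # first failure: lose access to component A
obj.write()            # B + W still form a majority, so the object stays writable
print(A.seq, B.seq)    # 1 2 -> A is now behind; when it returns it will be the stale copy
```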