
Read locality in VSAN stretched cluster

Many regular readers will know that we do not do read locality in Virtual SAN. For VSAN, it has always been a trade-off of networking vs. storage latency. Let me give you an example. When we deploy a virtual machine with multiple objects (e.g. a VMDK), and this VMDK is mirrored across two disks on two different hosts, we read in a round-robin fashion from both copies based on the block offset. Similarly, as the number of failures to tolerate is increased, resulting in additional mirror copies, we continue to read in a round-robin fashion from each copy, again based on block offset. In fact, we don’t even need to have the VM’s compute reside on the same host as a copy of the data. In other words, the compute could be on host 1, the first copy of the data could be on host 2 and the second copy of the data could be on host 3. Yes, I/O will have to do a single network hop, but when compared to the latency in the I/O stack itself, this is negligible. The cache associated with each copy of the data is also warmed as reads are requested. The added benefit of this approach is that vMotion operations between any of the hosts in the VSAN cluster do not impact the performance of the VM – we can migrate the VM to our heart’s content and still get the same performance.
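To make the round-robin idea concrete, here is a minimal sketch in Python (my own illustration, not VSAN source code) of how a read at a given block offset might be mapped to one of the mirror copies. The 1 MiB segment size and the host names are assumptions purely for the example.

```python
# Illustration only: round-robin read distribution across mirror
# copies, keyed off the block offset (segment size is an assumption).
STRIPE_SEGMENT = 1024 * 1024  # 1 MiB per segment, for the example

def pick_replica(offset_bytes, replicas):
    """Return the mirror copy that services a read at this offset."""
    segment = offset_bytes // STRIPE_SEGMENT
    return replicas[segment % len(replicas)]

# Compute on host 1; data copies on host 2 and host 3.
replicas = ["host2", "host3"]
print(pick_replica(0, replicas))                    # host2
print(pick_replica(STRIPE_SEGMENT, replicas))       # host3
print(pick_replica(5 * STRIPE_SEGMENT, replicas))   # host3
```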

So that’s how things were up until the VSAN 6.1 release. There is now a new network latency element which changes the equation when we talk about VSAN stretched clusters. The reasons for this change will become obvious shortly.

What if we made no change to the read algorithm?

Let’s talk about the round-robin, block offset algorithm in a stretched cluster environment first of all. For the sake of argument, let’s assume that read latency in a standard hybrid VSAN cluster is 2ms. That means that the VM can move between any hosts in this cluster, and latency remains constant. Now we introduce the concept of a hybrid VSAN stretched cluster, where hosts that are part of the same cluster can be located miles/kilometers apart. The guidance that we are giving our VSAN stretched cluster customers is that latency between sites should be no more than 5ms RTT. Therefore a read operation, which was previously 2ms, could now incur up to 2ms + 5ms = 7ms if it has to traverse the link between the sites. We would be looking at a two- to three-fold increase in latency if the original round-robin, block-offset read algorithm were left in place.
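As a quick back-of-the-envelope check, using only the figures quoted above (2ms local read latency, 5ms maximum inter-site RTT), this is where the two- to three-fold estimate comes from:

```python
# Back-of-the-envelope arithmetic only, using the numbers quoted in the post.
local_read_ms = 2.0       # assumed hybrid VSAN read latency
inter_site_rtt_ms = 5.0   # maximum supported RTT between sites

# Worst case: the read is serviced by the copy on the remote site.
remote_read_ms = local_read_ms + inter_site_rtt_ms         # 7.0 ms

# With one copy per site and round-robin reads, roughly half the
# reads would cross the inter-site link.
average_read_ms = local_read_ms + 0.5 * inter_site_rtt_ms  # 4.5 ms

print(average_read_ms / local_read_ms)  # ~2.25x on average
print(remote_read_ms / local_read_ms)   # 3.5x in the worst case
```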

A new read algorithm for VSAN 6.1

However, we have not left the behaviour like that. Instead, a new algorithm specifically for stretched clusters is introduced in VSAN 6.1. Rather than continuing to read in a round-robin, block offset fashion, VSAN now has the smarts to figure out which site a VM is running on, and do 100% of its reads from the copy of the data on the local site. This means that there are no reads done across the link during steady-state operations. It also means that all of the caching is done on the local site. This avoids incurring any additional latency, as reads do not have to traverse the inter-site link. Note that this is not read locality on a per-host basis. It is read locality on a per-site basis. On the same site, the VM’s compute could be on one host while its local data object could be on another host, just the same as in a standard non-stretched VSAN deployment.
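Conceptually, the site-level locality could be sketched like this (again, an illustration of the behaviour described above rather than VSAN internals; the site and host names are made up):

```python
# Illustration only: prefer the mirror copy on the same *site* as the
# VM's compute, regardless of which host on that site holds it.

def pick_replica(vm_site, replicas):
    """replicas: list of (site, host) pairs holding a mirror copy."""
    local = [r for r in replicas if r[0] == vm_site]
    if local:
        return local[0]   # steady state: 100% of reads stay on-site
    return replicas[0]    # no local copy left: read across the link

replicas = [("site-A", "esx-a2"), ("site-B", "esx-b3")]
print(pick_replica("site-A", replicas))  # ('site-A', 'esx-a2')
print(pick_replica("site-B", replicas))  # ('site-B', 'esx-b3')
```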

What happens on a failure?

We recommend the use of vSphere HA and VM/Host affinity rules in a VSAN stretched cluster. This means that if there is a host failure on one site, vSphere HA will try to restart the VM on the same site. Thus the VM would still have read locality, and access to the hot/warm cached blocks of data.

If there is a failure that impacts the local copy of the data, then the remote copy (and cache) will be accessed until a replacement copy of the data is rebuilt on the original site. During this time, the expectation is that the virtual machine may not perform optimally until the replica is rebuilt and the cache has re-warmed on the original site. Customers may decide to migrate virtual machines to the other site while the rebuild is taking place.
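To illustrate that degraded path (again just a sketch of the behaviour described above, with made-up replica states), reads are redirected to the surviving remote copy only while the local copy is absent or still resyncing:

```python
# Illustration only: route reads to the local-site copy when it is
# healthy, and across the inter-site link while it is being rebuilt.

def route_read(vm_site, replicas):
    """replicas: list of dicts with 'site' and 'state' ('active'/'rebuilding')."""
    local_ok = [r for r in replicas
                if r["site"] == vm_site and r["state"] == "active"]
    if local_ok:
        return local_ok[0]                 # normal case: stay on-site
    # Local copy lost or resyncing: fall back to the remote copy
    # until the replacement replica (and its cache) is ready again.
    return next(r for r in replicas if r["state"] == "active")

replicas = [{"site": "site-A", "state": "rebuilding"},
            {"site": "site-B", "state": "active"}]
print(route_read("site-A", replicas))      # remote copy on site-B
```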

If there is a catastrophic failure, however, and the whole of a site goes down, virtual machines on the failed site are restarted on the remaining site. In this case, reads will come from the replica copy on the remaining site (site locality once again), but the cache will be cold. During this time, the expectation is that the virtual machine may not perform optimally until the cache has warmed on the new site. Remember that prior to the failure, 100% of the reads were coming from the original site’s objects, so only the cache on the original site was warm. Now that the VM has failed over to the other site, the cold cache must be warmed.

What about all-flash? No read cache, so what happens?

It is true to say that, since the capacity layer in an all-flash VSAN is also flash, there is no need to cache reads in a dedicated cache layer like we do for hybrid. However, the read locality behaviour still has a purpose, as it allows VMs to read from the local copy of the data rather than traversing the inter-site link. Of course, the same performance considerations arise if the VM has to read across the inter-site link in the event of a failure. One interesting point is that if the VM is restarted on the other site by vSphere HA after a failure, there is no read cache that needs to warm, so it will be performing optimally immediately after failover.
