iSCSI on vSAN Stretched Cluster

vSAN readers will most likely be aware that we introduced support for iSCSI on vSAN way back in vSAN 6.5. That is to say, we had the ability to create iSCSI targets and LUNs using vSAN objects, and present those LUNs to external iSCSI initiators. That release also supported Persistent Group Reservations (PGRs), but it lacked transparent failover. We followed this up with an enhancement in vSAN 6.7 that added transparent failover, which in turn allowed features like Windows Server Failover Cluster (WSFC) to work with iSCSI on vSAN when using shared disk mode, as it uses reservations on…

vSAN Stretched Cluster and Horizon View interop update

This question has come up a number of times in the past. However, there have been some updates that I personally was not aware of until last week. To cut to the chase, Horizon View 7 (all clone types) is supported with vSAN stretched clusters. This is good news. However, it is very important that customers follow the Horizon View Reference Architecture (RA) design document and test the scalability of Horizon 7 and vSAN Stretched Clusters in their environment.

Big news from Pure Storage

There was lots of big news yesterday from our friends over at Pure Storage. First of all, there was an announcement about their Virtual Volumes (VVols) implementation going GA. This is very exciting for me, and I look forward to testing it out in our lab. It is a VASA 3.0 implementation, which means support for array-based replication (ABR). My good pal Pete Flecha did a great write-up on the announcement here. And of course, Cody Hosterman of Pure Storage also gives us his low-down on what this VVol implementation means to him. One thing is for sure,…

Supporting Fault Tolerance VMs on vSAN Stretched Cluster

During one of our many discussions at VMworld 2017, I was asked about supporting Fault Tolerance on vSAN Stretched Clusters, more specifically SMP-FT. Now, to be clear, we have supported SMP-FT on vSAN since version 6.1. The difficulty with supporting SMP-FT on a vSAN stretched cluster has always been the possible latency between the data sites, which can be as much as 5ms. This is far too high to support SMP-FT on a VM that has data replicating between data sites, and for that reason, we stated categorically that we could not support SMP-FT on VMs deployed on vSAN…

How many hosts are needed to implement SFTT in vSAN Stretched Cluster?

Many of you who are well versed in vSAN will realize that we released a Secondary Failures To Tolerate (SFTT) feature with vSAN 6.6. This meant that not only could we tolerate failures across sites, but that we could also add another layer of redundancy to each copy of the data maintained at each of the data sites. Of course, the cross-site replication (now referred to as PFTT or Primary Failures To Tolerate) is still based on RAID-1 mirroring, and this continues to require a third site for the witness appliance, so that quorum can be obtained in the…
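To make the host-count question in the title concrete, here is a minimal sketch (Python, purely illustrative; the figures are the standard vSAN minimums for each fault tolerance method, not anything specific to this post) of the per-site host arithmetic:

```python
# Minimal sketch: minimum hosts needed in EACH data site of a stretched
# cluster for a given SFTT (Secondary Failures To Tolerate) setting.
# Assumes the standard vSAN minimums: RAID-1 needs 2*FTT+1 fault domains,
# RAID-5 (FTT=1) needs 4, RAID-6 (FTT=2) needs 6. Illustrative only.

def hosts_per_site(sftt: int, ftm: str) -> int:
    """Return the minimum host count per data site for the local protection."""
    if ftm == "RAID-1":
        if not 0 <= sftt <= 3:
            raise ValueError("RAID-1 mirroring supports SFTT values 0-3")
        return 2 * sftt + 1              # mirroring: 2n+1 hosts
    if ftm == "RAID-5":
        if sftt != 1:
            raise ValueError("RAID-5 erasure coding implies SFTT=1")
        return 4                         # 3 data + 1 parity components
    if ftm == "RAID-6":
        if sftt != 2:
            raise ValueError("RAID-6 erasure coding implies SFTT=2")
        return 6                         # 4 data + 2 parity components
    raise ValueError(f"unknown fault tolerance method: {ftm}")

if __name__ == "__main__":
    for sftt, ftm in [(1, "RAID-1"), (2, "RAID-1"), (1, "RAID-5"), (2, "RAID-6")]:
        per_site = hosts_per_site(sftt, ftm)
        # Both data sites need this many hosts; the witness sits at a third site.
        print(f"SFTT={sftt}/{ftm}: {per_site} hosts per data site, "
              f"{2 * per_site} hosts across both sites + 1 witness")
```

So, for example, SFTT=1 with local RAID-1 mirroring needs at least 3 hosts in each data site (6 hosts in total, plus the witness appliance at the third site), while SFTT=1 with RAID-5 pushes that to 4 hosts per site.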

New vSAN Stretched Cluster Topology now supported

After publishing the vSAN Networking Guide earlier this year, Paudie O’Riordain and I received numerous questions regarding support for having different stretched clusters host each other’s witness appliances. These queries arose because we discussed a 2-node (ROBO) topology which allowed this sort of configuration (i.e. the cross-hosting of witnesses) via special request. But in the networking guide, we explicitly stated that this was not supported with vSAN stretched clusters. However, after some further testing by our engineering teams, we can now relax this restriction if there are 4 independent sites hosting the different stretched clusters. In this case,…

Erasure Coding and Quorum on vSAN

I was looking at the layout of a RAID-5 object configuration the other day, and while these objects were deployed on vSAN with 4 components, something caught my eye. It wasn’t the fact that there were 4 components, which is what one would expect since we implement RAID-5 as a 3+1, i.e. 3 data segments and 1 parity segment. No, what caught my eye was that one of the components had a different vote count. Now, RAID-5 and RAID-6 erasure coding configurations are not the same as RAID-1. With RAID-1, we deploy multiple copies of the data depending on how many…
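A small, purely illustrative sketch of the quorum arithmetic may help here. The assumption baked into it (and called out in the comments) is that when an object has an even number of equal-weight components, one component is granted an extra vote so the total is odd and a clean majority always exists; the exact vote numbers below are for illustration, not dumped from a real object:

```python
# Illustrative sketch of vSAN-style vote counting for object quorum.
# Assumption: every component gets one vote, and if the total would be even,
# one component receives an extra vote so that ties are impossible.

def assign_votes(num_components: int) -> list[int]:
    """Give each component one vote, plus one extra vote if the total is even."""
    votes = [1] * num_components
    if sum(votes) % 2 == 0:
        votes[0] += 1                    # a 4-component RAID-5 object ends up 2,1,1,1
    return votes

def has_quorum(votes: list[int], available: list[bool]) -> bool:
    """The object stays accessible while available components hold >50% of votes."""
    present = sum(v for v, ok in zip(votes, available) if ok)
    return present > sum(votes) / 2

if __name__ == "__main__":
    votes = assign_votes(4)              # RAID-5 (3+1): 4 components, 5 votes total
    print("votes:", votes)
    print("all components present :", has_quorum(votes, [True, True, True, True]))
    print("one component missing  :", has_quorum(votes, [False, True, True, True]))
    print("two components missing :", has_quorum(votes, [False, False, True, True]))
```

The point of the uneven vote count is simply that 4 components with 4 equal votes could split 2-2 and leave quorum ambiguous; bumping one component to 2 votes makes the total 5, so any surviving subset either clearly has a majority or clearly does not.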