VSAN with Horizon View Interop

It should come as no surprise that VMware Horizon View is also supported on VSAN. VMware released Horizon View version 5.3.1 to coincide with the vSphere 5.5 U1 and VSAN release. This release allows desktops to be deployed successfully on a VSAN datastore, using the default policy for the desktop storage objects. Let’s go through the steps to get this configured and running, and then we can talk about the default policy settings afterwards.

Deployment is pretty much identical to previous versions of VMware View; there are no significant differences. You create your desktop pools just like before, and you still select a virtual machine and snapshot combination that forms the ‘replica’ which becomes the basis for your linked clone desktops. However, per the View on VSAN guidelines, this virtual machine and snapshot should use the default VM Storage Policy (in other words, no policy should be selected when the VM is created on a VSAN datastore). Therefore, when you examine the replica object’s VM Storage Policy, it should look like the following:

[Screenshot: replica-no-policy]

But hang on! Didn’t I just say the replica would have a default policy? Then how come it shows the policy as ‘None’, and how come we see a RAID-1 configuration? Well, ‘None’ is how the default policy shows up in the UI. I’d agree that it would be nice to show this as ‘default’ instead (I’ll work on this internally). And yes, the default policy does include a capability for RAID-1: it sets Number of Failures To Tolerate to 1. Therefore this replica is highly available and can continue to be accessible even if there is a failure in the cluster (network, disk or host).
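
If you want to verify what that default policy actually contains, you can query it from the command line of any ESXi host in the VSAN cluster. Here is a minimal sketch using esxcli; the output shown in the comments is illustrative, and your values may differ:

    # Show the default VSAN policy for each object class (cluster, vdisk,
    # vmnamespace, vmswap). This command makes no changes.
    esxcli vsan policy getdefault

    # Expect to see hostFailuresToTolerate (Number of Failures To Tolerate)
    # set to 1 for each class, along the lines of:
    #   vdisk        (("hostFailuresToTolerate" i1))
    #   vmnamespace  (("hostFailuresToTolerate" i1))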

The only other step that differs is selecting the VSAN datastore as your destination (this is during the creation of a persistent linked clone pool):

[Screenshot: Select VSAN datastore]

With the desktop pool now created, let’s focus on the linked clone desktops that get rolled out. Let’s take a look at a desktop which has been deployed using this base replica. Its VM Storage Policy should look something like this:

[Screenshot: linked-clone-desktop - no policy]

The linked clone desktops deployed on VSAN have a VM Home Namespace object and four hard disk (VMDK) objects, all with the default (None) policy associated. Why are there four hard disks? View aficionados will already know this, but for the rest of us:

  • the OS disk, which is the clone disk;
  • the user data disk (persistent disk), which holds Windows profiles so that they are not affected by View Composer operations such as refresh and recompose;
  • the internal disk, which holds AD and Sysprep information;
  • the system disposable disk (SDD), which holds the Windows page file and temp files.

And if we take a closer look at these hard disks, we will also see that they are all using the default VM Storage Policy, which includes Number of Failures To Tolerate set to 1:

[Screenshot: linked-clone-disk-no-policy]

To recap, View 5.3.1 recommends using the default VM Storage Policy in this initial implementation. I understand that there are lots of plans right now to make these policies more configurable for the different disk types going forward, but this functionality is not in the View 5.3.1 release.
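
Incidentally, if you’d rather not click through the UI for each desktop, the Ruby vSphere Console (RVC) can also dump the object layout and applied policy for an individual VM. This is a rough sketch only, assuming RVC is connected to the vCenter Server managing the VSAN cluster; the datacenter and desktop names below are placeholders:

    # From within RVC, navigate to the datacenter's vms folder and inspect
    # one of the linked clone desktops (win7-desktop-01 is a placeholder name):
    cd /localhost/MyDatacenter/vms
    vsan.vm_object_info win7-desktop-01

    # The output lists the VM Home namespace object and each VMDK object,
    # together with the policy applied to it (hostFailuresToTolerate = 1
    # when the default policy is in use).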

Now, there is a way to change the default policy of VSAN if you so wish; however, this is only achievable through the esxcli command line. VMware KB article 2073795 describes how to do it. Remember, though, that this changes the default for every VM deployed on VSAN, not just Horizon View replicas and desktops, so caution should be exercised. It will also apply to every VM storage object and disk. To elaborate, if you decide to add a 10% flash read cache reservation to the default policy, every single object rolled out onto the VSAN datastore will get a 10% cache reservation. This includes disk objects which would get no benefit from read cache, such as the internal disk. You also risk consuming all of your read cache. The recommendation from VMware is to stick with the default policy and allow all storage objects to share the read cache equally; those that need it will consume it, and those that don’t will not.
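
For completeness, the mechanism described in the KB article looks something like the following. Treat this as a sketch only (the full syntax and the list of policy classes are in KB 2073795), it has to be run on every ESXi host in the cluster, and once again the recommendation is to leave the default alone:

    # Change the default policy for regular VMDK objects (the vdisk class) so
    # that new objects tolerate two failures instead of one. Illustrative only;
    # this affects every new object of that class, not just View desktops.
    esxcli vsan policy setdefault -c vdisk \
        -p "((\"hostFailuresToTolerate\" i2))"

    # Confirm the new default:
    esxcli vsan policy getdefault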

Just to close, there are a few things to keep in mind regarding View and VSAN:

  • View Storage Accelerator/CBRC works just fine with VSAN.
  • There is no longer a need for replica tiering with VSAN. This was a feature that placed replicas on a separate datastore backed by flash devices or SSDs, so that read I/O operations were served directly from the flash/SSD layer. VSAN negates the need for such a configuration since all I/O (if the system is properly sized) is serviced by the flash layer.
  • There is no support in this initial release for SE Sparse Disks. Linked clone pools use the original vmfsSparse format.
  • Full clones are supported as well as linked clones (although I don’t specifically call it out in this post).