VSAN 6.0 Part 4 – All-Flash VSAN Capacity Tier Considerations

In Virtual SAN version 6.0, VMware introduced support for an all-flash VSAN. In other words, both the caching layer and the capacity layer can be made up of flash-based devices such as SSDs. However, the mechanism for designating some flash devices for the capacity layer, while leaving others designated for the caching layer, is not at all intuitive at first glance. For that reason, I’ve included some steps here on how to do it.
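In short, the designation is done from the ESXi command line by tagging the device. Here is a minimal sketch using esxcli and vdq (the device identifier is a placeholder – substitute one of your own):

    # Tag a flash device for use in the capacity tier
    esxcli vsan storage tag add -d naa.xxxxxxxxxxxxxxxx -t capacityFlash

    # Verify the tag took effect; look for IsCapacityFlash in the output
    vdq -q -d naa.xxxxxxxxxxxxxxxx

    # Remove the tag to make the device a caching tier candidate again
    esxcli vsan storage tag remove -d naa.xxxxxxxxxxxxxxxx -t capacityFlash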

Continue reading

More Virtual Volumes (VVols) and Snapshots goodness

Well, I got so many questions about my previous articles on a new way of doing snapshots with VVols that I decided to take the time and dig even deeper into their behaviour. In this setup, I take a Windows 2008 Guest OS running in a virtual machine deployed on an NFS datastore, and I compare it to an identical VM deployed on a VVol datastore, looking purely at how snapshots are handled. Remember that with VVols, a snapshotted VM always runs on the base disk, compared to the traditional way of doing snapshots, where the VM always runs on the top-most delta in the chain.

Continue reading

VSAN 6.0 Part 3 – New Default Datastore Policy

One of the most common questions I got about VSAN 5.5 was: “why is VSAN deploying thick disks, when all of the documentation states that VSAN deploys thin disks?”

The answer was quite straightforward: the VMs were being deployed without a VM Storage Policy. This meant that they went through the standard VM deployment wizard, which offers administrators the option of thin, lazy-zeroed thick (LZT) and eager-zeroed thick (EZT) formats. The default option is LZT, so if you just do click-click-click (just like I do) when deploying a VM, you end up deploying an LZT format VM, even on the VSAN datastore. I described this issue in this older blog post. It’s only when you select an actual VM Storage Policy when deploying a VM that VSAN uses the Object Space Reservation capability, which by default is 0%, meaning that the VM is effectively thinly provisioned. We realized that this was causing some issues for customers, so we improved this whole deployment mechanism in 6.0 with the introduction of Datastore Default policies.
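As an aside, you can check this default from the ESXi command line. A quick sketch, where the key point is the interpretation of the output:

    # Show the default VSAN policy used when no VM Storage Policy is applied;
    # a "proportionalCapacity" of i0 (or its absence) means an Object Space
    # Reservation of 0%, i.e. effectively thin provisioned
    esxcli vsan policy getdefault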

Continue reading

VSAN 6.0 Part 2 – v2 On-disk Format Upgrade Considerations

I was heavily involved in the documentation effort for VSAN 6.0, but I know that not everyone likes to RTFM, so to speak. What I thought I would do in this post is give an overview of the upgrade process and highlight some considerations. But I would really urge you to read through the VSAN 6.0 Administrators Guide, and perhaps the VSAN Troubleshooting Reference Manual, especially the sections dealing with upgrades, if you do plan to upgrade from VSAN 5.5 to 6.0. There is a lot of useful information there.

There are four steps to the upgrade process:

  1. Upgrading vCenter Server to 6.0
  2. Upgrading the ESXi hosts to 6.0
  3. Upgrading the on-disk filesystem format from v1 to v2 (VMFS-L to VirstoFS)
  4. Upgrading the components to v2

Steps 1 & 2 are outside the scope of this post; refer to the generic vSphere 6 documentation on how to do those. Steps 3 & 4 are done via a new RVC command that we will discuss in more detail here.
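As a quick preview, here is a sketch of the RVC side of things, assuming RVC is connected to the vCenter Server managing the cluster (the cluster and datacenter paths are placeholders):

    # Steps 3 & 4: upgrade the on-disk format and the components in one pass
    > vsan.v2_ondisk_upgrade /localhost/DC/computers/VSAN-Cluster

    # Review the disks afterwards; the output should report the new format
    > vsan.disks_stats /localhost/DC/computers/VSAN-Cluster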

Continue reading

VSAN 6.0 Part 1 – New quorum mechanism

vSphere 6.0 was released yesterday, and it includes the new version of Virtual SAN – 6.0. I now wish to start sharing some of the new features and functionality with you. One of the things we always enforced in version 5.5 was that when you deployed a VM with NumberOfFailuresToTolerate = 1, you always had at least 3 components: a 1st copy of the data, a 2nd copy of the data, and a witness component for quorum. In version 5.5, for a VM to remain accessible, “one full copy of the data and more than 50% of components must be available”. We have introduced some subtle differences around quorum and VM accessibility in version 6.0.
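If you want to examine that layout for yourself, RVC can display the component breakdown of a VM’s objects. A minimal sketch (the VM path is a placeholder):

    # Show the objects and components backing a VM; with
    # NumberOfFailuresToTolerate = 1, expect two copies of the data
    # plus a witness for each object
    > vsan.vm_object_info /localhost/DC/vms/myVM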

Continue reading

Virtual Volumes – A new way of doing snapshots

I learnt something interesting about Virtual Volumes (VVols) last week. It relates to the way in which snapshots have been implemented in VVols. Historically, VM snapshots have left a lot to be desired. So much so that the GSS best practices for VM snapshots, as per KB article 1025279, recommend having only 2-3 snapshots in a chain (even though the maximum is 32) and keeping no single snapshot for more than 24-72 hours. VVols mitigates these restrictions significantly, not just because snapshots can be offloaded to the array, but also in the way consolidate and revert operations are implemented.

Continue reading

Virtual Volumes – A closer look at Storage Containers

There are a couple of key concepts to understanding Virtual Volumes (or VVols for short), one of the key new storage features in vSphere 6.0. You can get an overview of VVols from this post. The first key concept is VASA – vSphere APIs for Storage Awareness. I wrote about the initial release of VASA way back at the vSphere 5.0 launch. VASA has changed significantly to support VVols, with the introduction of version 2.0 in vSphere 6.0, but that is a topic for another day. Another key concept is the Protocol Endpoint (PE), a logical I/O proxy presented to a host to communicate with Virtual Volumes. My good pal Duncan writes about some considerations with PEs and queue depths here. This again is a topic for a deeper conversation, but not today. Today, I want to talk about a third major concept: the Storage Container.
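As a taster before we get into the details, both of these constructs can be listed from an ESXi 6.0 host. A minimal sketch with esxcli:

    # List the Protocol Endpoints visible to this host
    esxcli storage vvol protocolendpoint list

    # List the VVol Storage Containers this host can see
    esxcli storage vvol storagecontainer list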

Continue reading