VAAI Comparison – Block versus NAS

I get a lot of questions about how the vSphere APIs for Array Integration (VAAI) primitives compare from a protocol perspective. A common request, for instance, is to describe the differences between the primitives for NAS storage arrays (NFS protocol) and the primitives for block storage arrays (Fibre Channel, iSCSI and Fibre Channel over Ethernet protocols). It is a valid question because, yes, there are significant differences, and the purpose of this blog post is to detail them for you.

Let’s begin with a brief overview of VAAI. Its primary purpose is to offload certain I/O tasks to the storage array, freeing up resources on the ESXi host for other work. So what primitives are there? Here is a summary of the Block, NAS & Thin Provisioning primitives:

    Block:             Atomic Test & Set (ATS), XCOPY (Full Copy), Write Same (Block Zero)
    NAS:               Full File Clone, Fast File Clone (Native Snapshot Support), Extended Statistics, Reserve Space
    Thin Provisioning: Thin Provisioning Stun, UNMAP (Dead Space Reclamation)
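
Before diving in, one practical note: the block offloads can be checked (and toggled) through host advanced settings. A minimal check from the ESXi shell might look like this on a 5.x host (a value of 1 means the offload is enabled):

    # Check whether the block primitives are enabled on this host (1 = enabled)
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove   # XCOPY
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit   # Write Same
    esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking    # ATS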

What are the main differences between the block and NAS primitives?

Migration and Clones

Let’s begin by comparing the Full File Clone (NAS) primitive and the XCOPY (block) primitive. Both are used when copying data, but there is a significant difference. Storage vMotion operations cannot leverage the Full File Clone primitive on NAS arrays – the primitive can only be leveraged when a cold migration (powered-off VM) is initiated. The XCOPY primitive, however, can be leveraged by Storage vMotion for powered-on VMs. This is a major advantage for block storage, especially when it comes to features like Storage DRS, which relies heavily on Storage vMotion for load balancing.
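
If you want to confirm that a given device actually advertises the primitive before relying on it, the per-device VAAI status can be queried from the ESXi shell (the naa identifier below is just a placeholder):

    # Show VAAI support for a specific device; Clone Status reflects XCOPY support
    esxcli storage core device vaai status get -d naa.600601601234567890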

Link Clones

Next up is the Fast File Clone (NAS) primitive. This allows the creation of linked clones to be offloaded to the storage array. It can assist with VMware View and vCloud Director deployments, both of which can use linked clones to back objects such as VDI desktops and vCloud vApps respectively. There is no equivalent primitive to offload linked-clone creation to the array for block storage, so this in turn is a major advantage for NAS arrays.

Zero

The primitives Write Same (block) and Reserve Space (NAS) are not the same thing. When creating VMDKs on block storage, one of the options is to create an eagerzeroedthick VMDK, which means zeroes get written to every block making up that VMDK. With the Write Same primitive, the act of writing zeroes is offloaded to the array, meaning we don’t have to send lots of zeroes across the wire, which speeds up the process. In fact, on some arrays this is simply a metadata update, making for a very fast zeroing operation indeed.
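
For illustration, creating an eagerzeroedthick VMDK from the ESXi shell looks something like this (path and size are placeholders); with Write Same support on the array, the zeroing phase is handled on the array rather than streamed from the host:

    # Create a 20GB eager-zeroed thick VMDK on a VMFS datastore
    vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/VMFS-DS01/testvm/testvm.vmdk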

Now, without VAAI NAS one never had the ability to pre-allocate or zero out space for VMDKs on NFS; historically the only option available was to build thin VMDKs on NFS. With the introduction of Reserve Space, one can now create thick VMDKs on NFS datastores. However, VAAI NAS Reserve Space is not like Write Same for block; it does not get the array to do the zeroing on its behalf. When creating a VMDK on a VAAI NAS array, selecting Flat sends a Space Reserve NAS VAAI command to the array, which guarantees that the space will be available. This is equivalent to VMFS lazyzeroedthick, and the blocks are zeroed on first write. Selecting Flat pre-initialized also sends a Space Reserve NAS VAAI command, but in addition the ESXi host writes zeroes to the whole VMDK – the equivalent of a VMFS eagerzeroedthick. This means it is a slow operation, and all of those zero writes are sent over the wire – they are not offloaded. So for zeroing operations, it is safe to say that block arrays have an advantage.
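
As a rough sketch only, and assuming the vendor’s NAS plugin is installed and supports Reserve Space, I believe the same disk types can be requested from the command line as well as from the UI (paths are placeholders; without the plugin, disks on NFS have historically come out thin regardless):

    # Thick disks on NFS - only honoured when the VAAI NAS plugin supports Reserve Space
    vmkfstools -c 20G -d zeroedthick      /vmfs/volumes/NFS-DS01/testvm/testvm.vmdk    # Flat
    vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/NFS-DS01/testvm/testvm_1.vmdk  # Flat pre-initialized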

As an aside, we just said that VAAI NAS Reserve Space allows you to create virtual disks in Thick Provision Lazy Zeroed (lazyzeroedthick) or Thick Provision Eager Zeroed (eagerzeroedthick) format on NFS datastores on arrays which support Reserve Space. However, when you check the disk type in the Virtual Machine Properties dialog box, the Disk Provisioning section always shows Thick Provision Eager Zeroed as the disk format, no matter which format you selected during disk creation. ESXi does not distinguish between lazy zeroed and eager zeroed virtual disks on NFS datastores. Just a heads-up – we know about it and are looking at a way to distinguish between the two.

Statistics

The next primitive is Extended Statistics (NAS). This allows us to query how much space a VMDK actually consumes on an NFS datastore. For example, I might have created a 100GB thin VMDK, but it is actually only consuming 25GB of space on the array. This was something vSphere previously had no insight into. The feature was never necessary for VMFS, since vSphere understands VMFS very well, but we did need something like it for NFS.
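
For illustration, the long-standing way to eyeball this from the ESXi shell is to compare the provisioned size of the flat file against the blocks it actually consumes (paths are placeholders):

    # Provisioned size of the flat file versus space actually consumed on the datastore
    ls -lh /vmfs/volumes/NFS-DS01/testvm/testvm-flat.vmdk
    du -h  /vmfs/volumes/NFS-DS01/testvm/testvm-flat.vmdk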

I’ve heard some customers and partners state that it would be nice to see how much actual space a VMFS volume is consuming when created on a thin provisioned LUN or volume on the array. Perhaps a future VAAI primitive – who knows.

Monitoring

Statistics for VAAI block primitives are automatically visible in esxtop. One can clearly see counters for ATS (locks), Clone (XCOPY), Zero (Write Same) & Delete (UNMAP), as well as fields indicating failures of said primitives (the columns containing _F). NAS primitives do not have any esxtop counters, making them more difficult to monitor.
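
To answer a question that comes up in the comments below, here is roughly how to surface those counters (the field letters are from memory of the 5.x builds, so double-check them on your release):

    esxtop   # launch esxtop in the ESXi shell (or use resxtop against a remote host)
    u        # switch to the disk device view
    f        # open the field selector
    o        # toggle VAAISTATS - adds the CLONE_RD/CLONE_WR, ATS, ZERO, DELETE and _F failure columns
    p        # toggle VAAILATSTATS/cmd - adds per-command VAAI latency figures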

Plugin Requirement

Remember that block primitives are available by default on ESXi hosts. NAS primitives require a special NAS plugin from the array vendors, and their implementations of primitives do vary. Check with your array vendor to verify which primitives are included with the plugin.
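
Two quick checks from the ESXi shell: look for the vendor VIB, and look at the Hardware Acceleration column for the mounted NFS datastores (the grep pattern is only an illustration, since plugin names vary by vendor):

    # Look for an installed VAAI NAS plugin VIB (names vary by vendor)
    esxcli software vib list | grep -i -E 'vaai|nas'

    # The Hardware Acceleration column shows Supported/Not Supported per NFS datastore
    esxcli storage nfs list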

Locking

Finally, ATS is a block-only primitive. It replaces SCSI reservations when metadata updates are done on VMFS volumes. Since NFS has its own locking mechanism and does not use SCSI reservations, ATS is not applicable to NFS.
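
As a quick check, a VMFS5 volume created fresh on an ATS-capable array should report an ATS-only mode (the datastore name is a placeholder):

    # Look for "Mode: public ATS-only" in the output
    vmkfstools -Ph -v1 /vmfs/volumes/VMFS-DS01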

Feedback

What other features do you think vSphere could offload to the storage array? Are there enhancements you would like to see made to VAAI? Let me know via the comments.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @CormacJHogan

21 Replies to “VAAI Comparison – Block versus NAS”

  1. Another excellent article from Cormac. Just one thing, is there a syntax to run esxtop to show these primitives?

  2. I have a question on write same.

    Does VMware call Write Same with the UNMAP bit set? What is the difference?

    Also, if the storage supports only thin-provisioned LUNs, does Write Same not make sense in those scenarios?

    1. Yes – all primitives are orthogonal. There is no inter-dependence between Write_Same (which does zeroing) and UNMAP (which reclaims dead space). For thin provisioned datastores, there may still be occasions where you wish to zero out all the space in advance, so yes, you can still use Write_Same to zero out the space on thin provisioned datastores.
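
       As a side note, dead space reclamation is a manual operation at the time of writing, and the command differs by release, so treat this as a rough sketch (the datastore name is a placeholder):

           # 5.0 U1 / 5.1 - run from inside the datastore; the number is the percentage of free space to reclaim
           cd /vmfs/volumes/VMFS-DS01
           vmkfstools -y 60

           # later releases add an esxcli equivalent, if memory serves
           esxcli storage vmfs unmap -l VMFS-DS01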

  3. Great article, sir. We’ve been searching for an EMC FC VAAI vs NetApp NFS VAAI comparison for a long time and this answered a lot of things for us.

  4. Are all of the NAS VAAI features listed mandatory for a NAS vendor to claim VAAI certification, or is a subset sufficient? Any insights?

    1. Hi Sandy, I don’t believe so. I have seen various vendors release some features of VAAI-NAS and not others. I don’t know what the official line is, but I would also ask up front which of the primitives a vendor supports.

      1. Thanks – appreciate your response. Two questions:

        1. >>Flat pre-initialized also sends a Space Reserve NAS VAAI command, plus it does ESX-based zero writing to the VMDK … This means that it is a slow operation, and any writes are sent over the wire – they are not offloaded

        I wonder why this is not offloaded to the array, and why the host would write these zeroes over NFS to the file being reserved?

        2. Also, when would one go for Thick Provision Lazy Zeroed (lazyzeroedthick) and when would one opt for Thick Provision Eager Zeroed (eagerzeroedthick)?

      1. In the VAAI NAS Certification Guide, for File Cloning:

        FullNativeTwoFileZTCV
        FullNativeEagerzeroedthickSameVolume
        FullNativeThinSameVolume
        FullNativeEagerzeroedthickCrossVolume
        FullNativeThinCrossVolume
        NativeVMSV
        NativeVMCV
        NativeZTSV
        NativeLazyZTSV
        NativeLazyEZTSV
        NativeLazyThinSV
