vSphere 5.1 Storage Enhancements – Part 3: vCloud Director

In this post, I want to highlight a number of storage improvements made in vSphere 5.1 that are going to be leveraged by the next release of vCloud Director.


First off, we have the file-sharing scalability enhancements made in VMFS-5, which now allow up to 32 hosts to share a single file. This is covered in detail in part 1 of this vSphere 5.1 storage enhancements series, but what it means for vCloud Director is that vApps deployed as linked clones can now have many more hosts sharing the base disk on a VMFS-5 volume.

VAAI NAS Offload

vSphere 5.0 introduced the offloading of linked clones for VMware View to native snapshots on the array via the NAS VAAI primitives. You can read more about this here. The vSphere 5.1 NAS VAAI enhancements allow array-based snapshots to be used for vCloud Director vApps based on linked clones, in addition to VMware View.

When VMware vCloud Director does a fast provision of a vApp/VM, it will transparently use VAAI NAS to offload the creation of the subsequent linked clones to VAAI-supported arrays.

Just like VAAI NAS support for VMware View in vSphere 5.0, this feature will also require a special VAAI NAS plug-in from the storage array vendor.

At the time of writing, NetApp already has this feature included in its next VSC release (4.1), which is currently in beta.

If “Fast Provisioning” is enabled in the Org vDC storage settings AND the “Enable VAAI for fast provisioning” check box is selected in the system-wide datastore settings, vCloud Director will issue the appropriate commands to create a native array-based snapshot for a linked clone instead of a standard redo-log based one.
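The interaction of the two settings can be sketched as a small decision model. This is purely illustrative Python, not vCloud Director's actual code; the function and flag names are my own assumptions.

```python
# Hypothetical model of the provisioning decision described above.
# Flag names are illustrative, not vCloud Director's real settings API.

def clone_mechanism(fast_provisioning: bool,
                    vaai_fast_provisioning_enabled: bool,
                    array_has_vaai_nas_plugin: bool) -> str:
    """Return which clone mechanism a vApp deployment would end up using."""
    if not fast_provisioning:
        return "full clone"
    if vaai_fast_provisioning_enabled and array_has_vaai_nas_plugin:
        # Offloaded to the array as a native snapshot
        return "native array snapshot"
    # Standard vSphere redo-log based linked clone
    return "redo-log linked clone"

print(clone_mechanism(True, True, True))    # native array snapshot
print(clone_mechanism(True, False, True))   # redo-log linked clone
print(clone_mechanism(False, True, True))   # full clone
```

The point of the model: both check boxes must be on, and the array must have the vendor VAAI NAS plug-in installed, before the offload path is taken.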

Profile Driven Storage Interoperability with vCloud Director

Storage Profiles are now represented in vCloud Director. Storage Profiles must still be configured at the vSphere layer, but they now surface up into vCloud Director. The storage profiles must first be added to a Provider vDC. For example, you might have Gold, Silver & Bronze storage profiles created. This then allows storage to be allocated and managed on a per-Org vDC basis. Continuing our example, an organization assigned only the Silver profile can use only those datastores which are tagged as ‘Silver’. This support for Storage Profiles allows a high level of separation between organizations at the storage level. Below is a screenshot of an Org vDC with two storage profiles, one for iSCSI storage and one for NFS storage.
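To illustrate how profile tagging constrains placement, here is a minimal sketch. The datastore names and profile tags are hypothetical, and this is not a vCD or vSphere API, just a model of the filtering behaviour described above.

```python
# Illustrative only: how a per-Org vDC storage profile restricts
# which datastores are eligible for placement. All names are made up.

datastores = [
    {"name": "iscsi-ds-01", "profile": "Gold"},
    {"name": "iscsi-ds-02", "profile": "Silver"},
    {"name": "nfs-ds-01",   "profile": "Silver"},
    {"name": "nfs-ds-02",   "profile": "Bronze"},
]

def eligible_datastores(org_vdc_profiles, datastores):
    """Datastores an Org vDC may use: only those tagged with one of its profiles."""
    return [d["name"] for d in datastores if d["profile"] in org_vdc_profiles]

# An Org vDC assigned only the Silver profile
print(eligible_datastores({"Silver"}, datastores))
# ['iscsi-ds-02', 'nfs-ds-01']
```

An Org vDC assigned multiple profiles (say Gold and Silver) would simply see the union of the matching datastores.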

Profile Driven Storage with vCloud Director

If the Storage Profile associated with a vApp is changed (this can be done via the properties of a vApp), the vApp is automatically Storage vMotion’ed to a compliant datastore. It is great to see vCloud Director leveraging this excellent vSphere feature.

Storage DRS Interoperability with vCloud Director

One of the major enhancements in vSphere 5.1 is to provide interoperability between Storage DRS and vCloud Director. This essentially means that vCloud Director 5.1 now recognises datastore cluster objects from Storage DRS. Just like Storage Profiles, the configuration of Storage DRS is done at the vSphere layer, but the resulting datastore clusters and their respective configuration surface up into vCloud Director. In order for this interoperability to work, Storage DRS now understands linked clones (which it didn’t do previously). Going forward, vCloud Director can now use Storage DRS for initial placement, space utilization and I/O load balancing of vApps based on linked clones.
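One way to picture why linked-clone awareness matters for initial placement: a clone landing on a datastore that already holds its base disk consumes only the delta, while any other datastore would first need its own copy of the base disk. A minimal sketch follows; it is purely illustrative, and real Storage DRS also weighs I/O load and other metrics.

```python
# Hypothetical illustration of linked-clone-aware initial placement.
# This only models the space-cost difference described in the text.

def placement_cost_gb(datastore, base_disk_gb, delta_gb):
    """Space a new linked clone would consume on this datastore."""
    if datastore["has_base_disk"]:
        # Only the redo log / delta disk is new
        return delta_gb
    # The base disk would have to be copied to this datastore first
    return base_disk_gb + delta_gb

candidates = [
    {"name": "ds-01", "has_base_disk": True},
    {"name": "ds-02", "has_base_disk": False},
]

best = min(candidates, key=lambda d: placement_cost_gb(d, base_disk_gb=40, delta_gb=2))
print(best["name"])  # ds-01
```

Without this awareness, a placement engine comparing raw free space alone could easily pick the more expensive datastore, which is why Storage DRS had to learn about linked clones before this interoperability could work.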


Snapshot Management in vCloud Director

The last feature introduced in vSphere 5.1 & vCloud Director 5.1 is the ability to take Virtual Machine snapshots from within vCloud Director. Previously one had to take these snapshots at the vSphere layer. As per the screenshot, you can now Create, Remove and Revert a snapshot via the vCloud Director UI.

Although this might be considered a minor improvement, it does alleviate some additional administration which was necessary in previous versions of vSphere/vCloud Director.

I guess the next question then is how do you tell if you have a snapshot on the VM?

By default this information is not displayed in the Virtual Machine view. To show it, select the option to display the column headings, which is on the right of the screen, and place a tick in the Snapshot column. You will now have a column denoting whether or not there is a snapshot for the Virtual Machine, as per the diagram below.

vCloud Director Snapshot Management

It is nice to see these vSphere storage features being leveraged by vCloud Director. It’s especially nice to see some of the interoperability between products and features.

Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @CormacJHogan

16 Replies to “vSphere 5.1 Storage Enhancements – Part 3: vCloud Director”

  1. Cormac Hogan, thanks a lot for this post!

    I’m trying to figure out how to offload the vCD 5.1 features
    – “create a new vApp” based on a vCD template and
    – “create snapshot”

    to a hardware storage. This storage does not support NFS (currently iSCSI, later FC/FCoE) so we could not benefit from the NAS offloading enhancement you described.

    According to Chris Colotti (http://www.chriscolotti.us/vmware/gotcha-vcloud-director-clone-wars-part-1-overview/): “vApp templates in vCloud Director are not stored in vSphere as “Template”, they are simply a powered off Virtual Machine.”

    So I would like to use some features of this storage (using vCD as frontend) to clone a vApp or to take a snapshot more efficiently than using native VMware features.

    Any ideas how to accomplish this, e.g. by hooking/creating small layer between vCD and vSphere Storage API?

    Or would you point me to the right person who could help me figure out how to accomplish this, please?

    Please don’t hesitate to contact me for more details.


  2. My understanding is that if you have a block storage array that supports VAAI, operations such as cloning will be automatically offloaded anyway. There is nothing you need to do at the vCD or vSphere layer to achieve this. You can verify that VAAI offloads are taking place via esxtop.

    Unfortunately, the snapshot/linked clone offload mechanism is only available on VAAI-capable NAS storage arrays, and you need a special NAS plugin from the array vendor to achieve this. This feature is not available on block storage arrays.

  3. Cormac, thank you. And I learned a lot about this topic since your last comment.
    So the only option for space-efficient clones (100% re-use of existing blocks/VMDK files) is either a VMware VAAI XCOPY enabled NAS (= NFS only) pointing to identical data within the storage allocation map / metadata, or waiting for VMware VVOLs.

  4. Hi Cormac
    Thanks for this post. It helped me to understand the concept of VAAI and fast provisioning.
    We have a slightly different problem in our vCD 5.1 environment, where the datastore clusters have been set up for Storage DRS and fast provisioning has been enabled on them within vCD 5.1. To provision a cloud VM which is potentially larger than a datastore by splitting its VMDK through SDRS, we cannot use Fast Provisioning, as it does not seem to support split VMDKs. Does this mean that our design should create a dedicated datastore cluster without fast provisioning, to accommodate VMs bigger than a datastore by leveraging SDRS? Or another school of thought is to not enable fast provisioning at all for the customers.


    1. Hi Raj,
      I assume that when you say deploy a VMDK larger than the datastore, you mean larger than the remaining free space on the datastore, right? If the VMDK is genuinely larger than the datastore itself, then you need larger datastores. If it is just larger than the remaining free space, then yes, Storage DRS via Storage vMotion will move VMs between datastores in the datastore cluster to free up enough space to deploy your new cloud VM.

  5. Hi Cormac
    It is indeed the latter, but our use case is different. Say we have a datastore cluster with 4 datastores of 600GB each. This datastore cluster is part of a Provider vDC. The vDC customer provisions a VM of 300GB and then decides to add another 400GB to it. Because there is no datastore big enough to handle a 700GB VM, the customer is not allowed to add that disk, even with SDRS. SDRS can split the VMDKs of this VM across datastores to allow the customer to have a 700GB VM, but this feature is not supported with vCD 5.1 Fast Provisioning. It is a bit confusing and difficult to design storage sizing here. And if you add the new snapshot feature of vCD 5.1, then things become worse, as a snapshot reserves the same amount of storage in the datastore as the current size of the VM.

    1. Raj, it would appear then that Storage DRS is not doing what it is supposed to in this case. Leave it with me while I make some further enquiries into why VCD is behaving like this.

    1. Thanks for the ping Mike. I’d forgotten to post an update. Indeed, it would appear that vCD is not fully using Storage DRS placement rules. The behaviour is as Raj described earlier; Storage DRS will not move VMs in order to make room for new VMs ‘in the current release’. I know this has been highlighted internally as a concern and something which needs to be addressed, but I can’t comment on when we’ll see this implemented.

      1. Cheers – appreciate that – the important thing is the customer knows it “isn’t them” and some mistake they have made in their configuration. Good to avoid rabbit holes…

Comments are closed.