Virtual Volumes – A closer look at Storage Containers

There are a couple of key concepts to understanding Virtual Volumes (or VVols for short). VVols is one of the key new storage features in vSphere 6.0. You can get an overview of VVols from this post. The first key concept is VASA – vSphere APIs for Storage Awareness. I wrote about the initial release of VASA way back at the vSphere 5.0 launch. VASA has changed significantly to support VVols, with the introduction of version 2.0 in vSphere 6.0, but that is a topic for another day. Another key concept is the Protocol Endpoint, a logical I/O proxy presented to a host to communicate with Virtual Volumes. My good pal Duncan writes about some considerations with PEs and queue depths here. This again is a topic for a deeper conversation, but not today. Today, I want to talk about a third major concept, the Storage Container.

 In a previous post on migrations and VVols, I posted the following diagrams to highlight some possible configurations of storage containers.

[Diagrams: multi-pool VVol storage container and single-pool VVol storage container]

The first thing to highlight is that a Storage Container is mapped 1:1 to a VVol datastore. This VVol datastore is then presented to whichever hosts the vSphere admin chooses; it does not necessarily have to be mounted on all hosts.
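To make this 1:1 mapping a little more concrete, here is a minimal sketch using pyVmomi (the vSphere Python SDK) that lists the VVol datastores a vCenter Server knows about, together with the hosts they are mounted on. The vCenter hostname and credentials are placeholders, and I’m assuming a vSphere 6.0 environment where at least one storage container has already been surfaced as a VVol datastore.

```python
# Minimal pyVmomi sketch: list VVol datastores and the hosts they are mounted on.
# The hostname and credentials below are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # A storage container surfaces in vSphere as a datastore of type "VVOL"
        if ds.summary.type == "VVOL":
            hosts = [mount.key.name for mount in ds.host]
            print("{0}: {1:.0f} GB capacity, mounted on {2}".format(
                ds.name, ds.summary.capacity / (1024.0 ** 3), hosts))
    view.Destroy()
finally:
    Disconnect(si)
```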

From some comments I received on previous posts, and from a lot of conversations that we have been having internally, these diagrams might lead to the perception that there is a need to stick to a single, uniform set of capabilities for a storage container. For example, this storage container is built on SAS drives and can do snapshots. Or this storage container is built on flash and can do dedupe. This is not the case. There is NO reason to stick to a single, uniform set of capabilities for a VVol storage container. What needs to be explained in more detail in the above diagram is the concept of pools, and what goes to make up a pool. A pool, for the most part, can be thought of as an aggregate or grouping of physical storage with a particular RAID level applied. How capabilities, such as snapshots, replication, etc., are applied will vary from array vendor to array vendor.

Disclaimer: We are still waiting to see how our storage array partners do their implementations. They have been provided with certain guidelines on what we expect from a VVol implementation, but obviously the implementations will vary from storage array vendor to storage array vendor.

Having stated the above, I’m hoping that this diagram can provide some further clarity:

[Diagram: detailed VVol storage container view]

There are a few things that are important to highlight here.

A storage container may be created for administrative reasons, and does not need to be based on the underlying physical storage or capabilities. Using a storage container, a storage admin can now carve out a certain amount of storage for, say, “HR” or “Finance” and include the mix of backing storage and capabilities that the department needs. For example, a department’s storage requirements may include a certain amount of flash, a certain amount of high-performance spinning disk, the ability to take snapshots, etc. This is then presented to the vSphere admin as a VVol datastore. The vSphere admin can then determine the specific storage that should be used by a VM by assigning the appropriate VM storage policy. This policy could reflect a choice of backing disk and capabilities.

Note that this approach allows a vSphere admin to create a VM on low-tier storage during development, without any data services/capabilities, and then, by simply changing the storage policy to include a mix of flash, disk drives, and some capabilities, the VM can be moved into production without triggering a Storage vMotion operation. In other words, the VM stays on the same VVol datastore, but now has a new set of capabilities, e.g. it can have snapshots.
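As a rough illustration of that last point, here is a hedged pyVmomi sketch that attaches a different VM storage policy to an existing VM and its disks via a simple reconfigure rather than a Storage vMotion. The VM object and the policy ID (an SPBM profile ID) are assumed to come from elsewhere, the helper name is my own, and how the array responds to the new policy is entirely vendor-specific.

```python
# Hypothetical sketch: attach a different VM storage policy to a VM and its disks
# via ReconfigVM_Task, so the VM stays on the same VVol datastore.
# "vm" is assumed to be a vim.VirtualMachine object and "policy_id" the SPBM
# profile ID of the new policy (e.g. retrieved via the PBM API or PowerCLI).
from pyVmomi import vim

def apply_storage_policy(vm, policy_id):
    profile = vim.vm.DefinedProfileSpec(profileId=policy_id)

    spec = vim.vm.ConfigSpec()
    spec.vmProfile = [profile]          # policy for the VM home (config VVol)
    spec.deviceChange = []

    # Apply the same policy to every virtual disk (each backed by its own VVol)
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            change = vim.vm.device.VirtualDeviceSpec()
            change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
            change.device = dev
            change.profile = [profile]
            spec.deviceChange.append(change)

    # The reconfigure stays within the same storage container; only the policy
    # (and therefore the capabilities asked of the array) changes.
    return vm.ReconfigVM_Task(spec=spec)
```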

There is nothing to stop you creating storage containers based on capabilities, but you may not want to surface all capabilities on all storage containers on the array. As one colleague mentioned, you might want the storage for the “Legal” department to be encrypted, but you might not want to offer this capability to “Marketing”. So now you may have two storage containers, one that includes encryption and one that does not. Another scenario might be preventing certain storage containers from using deduplication, which would make sense if the content is known not to be repetitive enough to be worth dedup’ing.

The point I’m trying to make here is that storage containers don’t have to be organized on the basis of backing storage or capabilities, which is how we’ve traditionally done things. A single storage container could represent the whole array, including all the storage and all of the capabilities/data services, with VM storage policies then used to place your VMs on the appropriate storage under the covers. Or a storage container could be used to provide administrative boundaries for different consumers of the storage. Or you could use storage containers to control access to different capabilities. So there are still a considerable number of design decisions for a storage admin to make up front.

Now we’ll have to wait and see what the best practices are from each of our storage array partners. Will they recommend storage containers based on simplicity, administrative purposes, capabilities, or something else? Fortunately, we won’t have much longer to wait to find out.

12 Replies to “Virtual Volumes – A closer look at Storage Containers”

  1. Cormac,

    It looks like the link to Duncan’s site is missing a “:” in the URL. This is what the link shows at least in Firefox:
    “http//www.yellow-bricks.com/2015/02/23/virtual-volumes-and-queueing/”

    Just a heads up.

    Thanks,
    Dale

    1. None that I am aware of, Sylvain. The configuration maximums state that you can have up to 64,000 VVols bound to a host, but I suspect that you are advised to talk to the storage array vendor about the number of VMs, as there are things like PE queue depth which need to be considered.

      1. OK. So, if there is only one queue per PE, it will certainly be the same “problem” as with a LUN. I’ll ask the Dell Storage team when the upgrade to ESXi 6 is in progress.

        Thanks.
        Sylvain

  2. Hi Cormac,

    Great article btw!
    You said you could mix and match different pools with different storage capabilities into one giant storage container.
    Does the vSphere admin see just the aggregate capacity of all pools together?
    If so, how do you manage storage space from these different pools? I mean, if VM A has a storage policy which matches Pool 1 and this pool is running out of space, will the admin be informed by an alert? Or does vCenter just report the capacity of the whole storage container and tell you everything is good?

    thanks
    Ronny

  3. Great article, as always. I always love reading your stuff and incorporate it in my classes regularly.

    I do have a question though. On block storage devices the protocol endpoint presents as a LUN. Does this mean that there is a shared LUN queue for all the VVols that use the PE? Is there any recommendation on the number of VMs per Protocol Endpoint? Should the LUN queue size be modified?

    Thanks!

    Rick Crisci

    1. Hi Rick,

      We have published some maximums in the 6.0 configuration maximums for VVols. However, most of this will be storage array dependent, and will vary from partner to partner. The best advice is to speak to the array vendor and ask about PE best practices (how many PEs per Storage Container, which multipath policy, how many physical adapters it should be presented on, etc.). But yes, there is a shared queue for all devices behind a PE, and this is the reason why devices identified as PEs have a higher queue depth than plain LUNs.
