
vSphere 6.0 Storage Features Part 5: Virtual Volumes

I pushed this post out a bit as I know that there is a huge amount of information out there around virtual volumes already. This must be one of the most anticipated storage features of all time, with the vast majority of our partners ready to deliver VVol-Ready storage arrays once vSphere 6.0 becomes generally available. We’ve been talking about VVols for some time now. Actually, even I have been talking about it for some time – look at this tech preview that I did way back in 2012 – I mean, it even includes a video! Things have changed a bit since that tech preview was captured, so let’s see what Virtual Volumes 2015 has in store.

Much kudos to my good friend Paudie who did a lot of this research.

Virtual Volumes Terminology

Let’s begin with a discussion about the new terminology that VVols introduces:

  * VASA Provider (VP) – the out-of-band management component, supplied by the array vendor, which surfaces array capabilities up to vCenter and handles VVol operations.
  * Protocol Endpoint (PE) – the access point on the array through which ESXi sends I/O to Virtual Volumes.
  * Storage Container (SC) – a logical pool of storage on the array in which Virtual Volumes are created; it surfaces in vSphere as a VVol datastore.
  * Virtual Volume (VVol) – an object on the array that encapsulates a virtual machine file, such as a VMDK.

Each of these is discussed in more detail below.

Configuration Steps for Virtual Volumes

In a nutshell, the following are the configuration steps required on vSphere to use Virtual Volumes:

  1. Add the VASA Provider to vCenter
  2. Discover or mount the PEs
  3. Create the Virtual Volume datastore
  4. Create VM Storage Policies
  5. Deploy VMs with a VM Storage Policy
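
If you like to verify this kind of configuration from a script, here is a minimal pyVmomi sketch, assuming a placeholder vCenter address and credentials and lab-only certificate handling, that lists any datastores reported with type “VVOL” – a quick way to confirm that steps 1–3 took effect:

```python
# A minimal pyVmomi sketch to confirm that VVol datastores are visible to
# vCenter. The hostname and credentials are placeholders - substitute your own.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # Storage containers surface as datastores of type "VVOL"
        if ds.summary.type == "VVOL":
            print(ds.name, ds.summary.capacity, ds.summary.freeSpace)
    view.DestroyView()
finally:
    Disconnect(si)
```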

Protocol Endpoint – detailed

In today’s LUN-datastore vSphere environments, a datastore serves two distinct purposes – it is the access point to which ESXi sends I/O, and it is also a storage container holding many virtual machine files (e.g. VMDKs). If we separate the concept of the access point from the storage aspect, we can run with a smaller number of access points, each of which can refer to a number of storage entities. This is the purpose of the Protocol Endpoint. We can address very many virtual volumes with just a few access points.

Protocol Endpoints are LUNs when the storage is block storage. Protocol Endpoints are mount-points when the storage is NAS/NFS. A PE is discovered/mounted in the same way as block and NAS datastores are discovered/mounted today.

Virtual Volumes are said to be bound to a Protocol Endpoint. When it comes to multipathing, an administrator only needs to set up multipathing and load balancing for the PE, and all the Virtual Volumes bound to that PE inherit the same multipathing and load balancing characteristics.
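
To make the binding concept a little more concrete, here is a purely illustrative Python sketch (the classes are my own, not part of any VMware API) of one access point fronting very many volumes, with each volume inheriting its PE’s multipathing policy:

```python
# Illustrative model only - these classes are my own, not part of any VMware API.
from dataclasses import dataclass, field

@dataclass
class ProtocolEndpoint:
    name: str
    multipath_policy: str                     # e.g. "RoundRobin", "FixedPath"
    bound_vvols: list = field(default_factory=list)

@dataclass
class VirtualVolume:
    name: str
    pe: ProtocolEndpoint = None

    def bind(self, pe: ProtocolEndpoint):
        """Bind this VVol to a PE; I/O to the VVol now flows through that PE."""
        self.pe = pe
        pe.bound_vvols.append(self)

    @property
    def multipath_policy(self):
        # A VVol has no multipathing configuration of its own:
        # it simply inherits whatever is set on its PE.
        return self.pe.multipath_policy

pe = ProtocolEndpoint("PE-LUN-0", multipath_policy="RoundRobin")
vvols = [VirtualVolume(f"vm{i}-data.vmdk") for i in range(1000)]
for v in vvols:
    v.bind(pe)                                # one access point, very many volumes
print(vvols[42].multipath_policy)             # -> RoundRobin, inherited from the PE
```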

Supported Protocols

VVols is supported with the following protocols:

  * Fibre Channel
  * FCoE
  * iSCSI
  * NFS v3

It is not supported with NFS v4.1, which is newly introduced in vSphere 6.0.

Storage Containers – detailed

The first thing to point out is that a storage container is not a LUN. It is a logical storage construct for the grouping of virtual volumes, and it is set up on the storage array by the storage administrator. In some respects, it can be thought of as a pool of storage on the array. The capacity of the container is based on physical storage capacity, and there must be at least one storage container per array. The maximum number of storage containers depends on the array. Multiple storage containers can be used to logically partition or isolate VMs with diverse storage needs and requirements. A single storage container can be simultaneously accessed via multiple Protocol Endpoints.

When presented to ESXi hosts, the storage container appears as a VVol datastore.
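
Again purely as an illustration (my own classes, not a VMware API), a storage container can be modelled as a capacity pool that groups VVols, is reachable via several PEs, and reports its remaining space as the free space of the VVol datastore:

```python
# Illustrative sketch (my own classes, not a VMware API): a storage container
# is a logical pool whose capacity comes from the array's physical storage.
class StorageContainer:
    def __init__(self, name, capacity_gb, protocol_endpoints):
        self.name = name
        self.capacity_gb = capacity_gb                 # backed by physical capacity
        self.protocol_endpoints = protocol_endpoints   # one container, several PEs
        self.vvols = []

    @property
    def free_gb(self):
        return self.capacity_gb - sum(size for _, size in self.vvols)

    def allocate_vvol(self, name, size_gb):
        if size_gb > self.free_gb:
            raise RuntimeError("storage container out of space")
        self.vvols.append((name, size_gb))

# Two containers logically partition VMs with different requirements on one array.
gold   = StorageContainer("gold",   capacity_gb=10240, protocol_endpoints=["PE0", "PE1"])
bronze = StorageContainer("bronze", capacity_gb=51200, protocol_endpoints=["PE0"])
gold.allocate_vvol("prod-vm-data", 100)
print(gold.free_gb)    # what ESXi would report as free space on the VVol datastore
```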

Virtual Volumes – detailed

Those of you who are familiar with VSAN will already be familiar with virtual volumes in some respects. When we talk about virtual volumes, or VVols, what we are basically talking about is the encapsulation of virtual machine files on the array, exported as objects. Virtual Volumes are created when you perform virtual machine operations such as “Create a Virtual Machine”, “Power on a Virtual Machine” or “Clone or Snapshot a VM”. vCenter associates one or more Virtual Volumes with a Virtual Machine. A Virtual Machine, when deployed to a VVol datastore, can be thought of as being composed of a number of Virtual Volumes.

Types of Virtual Volumes

Just like on VSAN, where each VM is made up of a set of objects, VMs deployed on a VVol datastore will be made up of a number of different VVols. The following are some typical VVols:

  * Config-VVol – holds the VM’s metadata files, such as the .vmx file, log files and NVRAM.
  * Data-VVol – corresponds to a virtual disk; there is one per VMDK.
  * Swap-VVol – the VM swap file, created when the VM is powered on.
  * Memory-VVol – holds the memory contents for snapshots taken with memory (and for suspended VMs).
  * Other-VVol – a catch-all for vendor-specific solutions.
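
As a rough illustration of how these map to VM operations (the function and the naming scheme are mine, not any VMware API), the set of VVols backing a VM might be enumerated like this:

```python
# Illustrative only: which VVols back a VM in a few common states.
# The function and the naming scheme are mine, not any VMware API.
def vvols_for_vm(name, num_disks, powered_on=False, mem_snapshot=False):
    vols = [f"{name}-config"]                             # Config-VVol: .vmx, logs, NVRAM
    vols += [f"{name}-data{i}" for i in range(num_disks)] # one Data-VVol per VMDK
    if powered_on:
        vols.append(f"{name}-swap")                       # Swap-VVol while powered on
    if mem_snapshot:
        vols.append(f"{name}-mem")                        # Memory-VVol for snapshots with memory
    return vols

print(vvols_for_vm("web01", num_disks=2, powered_on=True))
# ['web01-config', 'web01-data0', 'web01-data1', 'web01-swap']
```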

A note about queue depths

There has been some concern raised regarding queue depths and Virtual Volumes. Traditional LUNs and volumes typically do not have very large queue depths, so if there are a lot of VVols bound to a PE, doesn’t this impact performance? This is addressed in a number of ways. First, array vendors are free to choose the number of PEs to bind their VVols to (i.e. they have full control over the number of PEs deployed, which could be very many). Secondly, we allow a greater queue depth for PE LUNs, to accommodate a possibly greater I/O density. However, considering that vendors already have a choice regarding the number of PEs per storage container, and over storage container size, this increased queue depth may not be needed in many situations.
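
Some back-of-envelope arithmetic shows why this matters; the queue depths below are placeholder numbers, not defaults from ESXi or any particular array:

```python
# Back-of-envelope arithmetic; the queue depths are placeholder numbers,
# not defaults from ESXi or any particular array.
def per_vm_queue_share(total_vms, access_points, depth_per_access_point):
    return (access_points * depth_per_access_point) / total_vms

# 200 VMs sharing one traditional LUN with a queue depth of 32:
print(per_vm_queue_share(200, access_points=1, depth_per_access_point=32))   # 0.16
# The same 200 VMs behind 4 PEs, each allowed a deeper queue of 128:
print(per_vm_queue_share(200, access_points=4, depth_per_access_point=128))  # 2.56
```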

The role of policies

One thing to keep in mind is that SPBM, Storage Policy Based Management, plays a major role in virtual machine deployment. Once again, just like VSAN, VM deployment is policy driven. Capabilities are surfaced up to vSphere, the administrator builds policies using those capabilities, a policy is chosen when the VM is being created/deployed, and the VM’s VVols are created in such a way as to match the policy requirements.

These capabilities will vary from storage array vendor to storage array vendor, but think of capabilities like dedupe, compression, encryption, flash acceleration, and so on. There is no hard and fast list of VVol capabilities – it all depends on the array. If the array supports a capability, then VVols can consume it. The VASA Provider, referenced earlier, is how these capabilities are exposed to vCenter, and this is all under the control of the array vendor.

Now these capabilities can be chosen on a per-VM basis, and the resulting VVols will be placed on an appropriate storage container that can offer those capabilities. The policy can then be checked for compliance throughout the life-cycle of the VM, ensuring that the VM has the required storage feature set. When you hear VMware talking about Software Defined Storage, this is at its very core.

A final point is about the policies. Each policy can have multiple rule-sets, each rule-set potentially relating to a different vendor. If the rule-set relating to one array cannot satisfy the requirements in the policy, then perhaps the rule-set from another vendor can. Of course, you will need multiple storage containers with different capabilities (or multiple VVol-capable arrays from different vendors) for this to work, but hopefully you can see how powerful this feature is.
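
Here is an illustrative sketch of that matching logic (my own simplification, not the real SPBM API): a storage container is a placement candidate if any one of the policy’s rule-sets is fully satisfied by the capabilities that container advertises:

```python
# Illustrative sketch of SPBM-style matching, not the real SPBM API. A container
# is a placement candidate if ANY one rule-set in the policy is fully satisfied
# by the capabilities that container advertises.
def matching_containers(policy_rulesets, containers):
    return [name for name, caps in containers.items()
            if any(ruleset <= caps for ruleset in policy_rulesets)]

containers = {
    "vendorA-gold": {"dedupe", "compression", "flash-acceleration"},
    "vendorB-fast": {"flash-acceleration", "encryption"},
}
policy = [
    {"dedupe", "compression"},              # rule-set aimed at vendor A's array
    {"encryption", "flash-acceleration"},   # alternative rule-set for vendor B
]
print(matching_containers(policy, containers))
# ['vendorA-gold', 'vendorB-fast'] - either rule-set can satisfy the policy
```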

What about VAAI?

This was in fact one of my early questions – what does VVols mean for VAAI? Let’s look at each of the VAAI primitives and discuss any differences. Remember that, just like VAAI, the individual array vendors need to support a primitive for it to work with VVols.

ATS (Atomic Test and Set)

XCOPY (Cloning, Linked Clones)

UNMAP

Thin Provisioning Out of Space (OOS)

A few additional notes about VAAI and VVols. A common question is whether or not VVols and LUNs/datastores from arrays that use VAAI can be presented to, and co-exist on, the same ESXi host. The answer is absolutely, yes. In this case, should there be a request to clone a VMDK from a datastore to a VVol, VAAI will be used to clone from the VAAI-enabled datastore to a Virtual Volume.
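
As a rough sketch of that decision (my own function, not ESXi source code), the clone path might be selected like this:

```python
# Rough decision sketch (my own function, not ESXi source code): which offload
# path a clone might take, following the behaviour described above.
def clone_offload_path(src_type, dst_type, array_supports_offload=True):
    if src_type == "vvol" and dst_type == "vvol" and array_supports_offload:
        return "VASA: ask the array to clone the virtual volume internally"
    if src_type == "vmfs" and dst_type == "vvol" and array_supports_offload:
        return "VAAI: XCOPY from the VAAI-enabled datastore to the VVol"
    return "host software data mover (no offload available)"

print(clone_offload_path("vmfs", "vvol"))
# -> VAAI: XCOPY from the VAAI-enabled datastore to the VVol
```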

The other interesting point is around VAAI-NAS, which had a different set of primitives when compared to VAAI on block storage. VVols now levels the playing field. For example:

Conclusion

We’ve waited a long time for this feature. This is a game changer in the storage space in my humble opinion.
