Regular readers will know that I normally blog about the technical aspects of storage rather than writing opinion pieces. However, a number of articles published recently have questioned the value of VMware’s Virtual Volumes, commonly referred to as VVols. In general, the pieces I have read ask whether VVols (or, to be more accurate, the per-VM granularity feature of VVols) adds value when NFS already provides per-VM granularity in the form of files. The point missed in these pieces is that VVols is so much more than per-VM granularity. I’ve just come back from some great VMUG events in Frankfurt, Germany and Warsaw, Poland, where I presented on the value of VVols to our users. I therefore thought it opportune to post about the other benefits of Virtual Volumes.
Before I begin, I would like to give a shout out to my good friend over at Veeam, Luca Dell’Oca. Luca has already composed an excellent response to one such piece, in which he highlighted the benefits of SPBM, Storage Policy Based Management. If I can paraphrase Luca’s response: having the storage surface up its capabilities to vCenter, and then being able to compose policies based on those capabilities, enables you, our customers, to do successful initial placement of a VM’s storage each and every time. It also allows you to check, at a glance, whether the underlying storage is still meeting the requirements of the VM throughout that VM’s life-cycle. Maybe there was a failure at the storage layer, or perhaps the VM was migrated to a different datastore during some maintenance activity. Any number of things can impact the underlying storage of a VM. Historically, to verify that a VM was on the correct storage, customers had to track datastore capabilities through various means. For example, we had customers who maintained a spreadsheet of datastores, capabilities and virtual machines. Similarly, we had customers creating really long datastore names to try to reflect a datastore’s capabilities. With SPBM (and of course VASA, the storage awareness APIs) surfacing up these capabilities, and allowing policies associated with VMs to be pushed down to the storage layer, this whole management of storage becomes so much easier.
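To make the idea concrete, here is a minimal sketch of this capability-matching logic in Python. The datastore names, capability strings and function names are all invented for illustration; the real SPBM and VASA machinery lives inside vCenter and the array, not in a dictionary.

```python
# Illustrative model of SPBM-style compliance checking.
# All names here are hypothetical, for illustration only.

# Capabilities surfaced by the array (via VASA) for each datastore.
datastore_capabilities = {
    "vvol-ds-gold":   {"thin-provisioning", "deduplication", "replication"},
    "vvol-ds-silver": {"thin-provisioning", "deduplication"},
}

def compliant_datastores(policy, catalog):
    """Return the datastores whose surfaced capabilities satisfy the policy."""
    return [ds for ds, caps in catalog.items() if policy <= caps]

def is_compliant(vm_datastore, policy, catalog):
    """Check whether a VM's current datastore still meets its policy."""
    return policy <= catalog.get(vm_datastore, set())

# A policy composed from surfaced capabilities: this VM must be
# thin-provisioned and replicated.
gold_policy = {"thin-provisioning", "replication"}

# Initial placement: only the gold datastore qualifies.
print(compliant_datastores(gold_policy, datastore_capabilities))

# If the VM later ends up on the silver datastore, the at-a-glance
# compliance check flags it immediately.
print(is_compliant("vvol-ds-silver", gold_policy, datastore_capabilities))
```

No spreadsheets and no encoded datastore names: placement and ongoing compliance both fall out of the same capability data the storage itself surfaces.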
Making storage management simpler for the vSphere Admin
I actually left a comment on Luca’s blog to highlight the fact that it is not just per-VM granularity and policies that give VVols value. There are other aspects as well. With VVols, we introduce two new concepts: the Storage Container and the Protocol Endpoint, as shown in the previous diagram. The Storage Container is easy to explain; you can think of it as an aggregate or pool of storage on the array with an associated set of capabilities. For example, these capabilities might be thin provisioning and/or deduplication and/or encryption and/or replication, and so on. The list goes on, but will be very much array dependent. This container (or containers) is surfaced in vSphere as a single VVol datastore. A vSphere admin now has a complete view of the array’s storage at his or her disposal. There is no longer a need to reach out to the storage admin to carve up and present new LUNs or volumes when storage is required for new virtual machines.
Reducing storage complexity
The Protocol Endpoint (PE) is another feature aimed at simplicity. Consider the traditional LUN or volume, which is both an access point and a storage container. The objective of the PE is to decouple the access point from the storage container. What this means is that we can now address all the storage on the array through one or more PEs. The storage presentation workflows we had to do in the past, such as creating initiator groups, ensuring LUN IDs matched across all hosts, and making sure the LUN was presented correctly to all ESXi hosts, are now greatly reduced.
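A toy model may help illustrate the decoupling. The class and method names below are invented for illustration; in the real implementation, binding a VVol to a PE is a VASA operation negotiated between ESXi and the array.

```python
# Toy model of a Protocol Endpoint decoupling the access point
# from the storage it addresses. Names are hypothetical.

class ProtocolEndpoint:
    """A single access point through which many VVols are addressed."""

    def __init__(self, pe_id):
        self.pe_id = pe_id
        self.bindings = {}  # vvol_id -> sub-address under this PE

    def bind(self, vvol_id):
        # On bind, the array hands back a sub-address beneath this PE,
        # so no new LUN ever needs to be presented to the host.
        self.bindings[vvol_id] = f"{self.pe_id}:{len(self.bindings) + 1}"
        return self.bindings[vvol_id]

# The PE is presented to the host once (one device, one set of paths)...
pe = ProtocolEndpoint("PE-0")

# ...yet it can front the storage objects of many VMs.
for vm in range(3):
    for disk in ("config", "data"):
        pe.bind(f"vm{vm}-{disk}")

print(len(pe.bindings))  # 6 VVols addressed through a single device
```

The point of the sketch is that adding another VM’s storage changes nothing on the presentation side: it is just another binding under an access point the host already knows about.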
On the vSphere side, managing the multipathing configuration across all LUNs and all hosts is also simplified, as you now only need to concern yourself with the PE device. Note that this will be true for NFS v4.1 as well, as soon as it is supported with VVols. Right now we only support NFS v3 with VVols. NFS v4.1, support for which was introduced in the vSphere 6.0 release, supports multipathing.
I also wanted to make a point about scale. Currently there is a limit of 256 LUNs per ESXi host. With the introduction of VVols and PEs, we can now have up to 256 PE devices, from multiple different arrays, presented to the same ESXi host. What this means is that we can now have many thousands of VVols (array dependent, of course) presented to an ESXi host. VVols allows us to scale to a size that was not achievable previously.
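As a back-of-the-envelope illustration of that scale difference (the per-PE figure below is purely hypothetical, since the real limit is array dependent):

```python
# Rough comparison of the two addressing models, using the
# 256-devices-per-host limit quoted above. The VVols-per-PE
# figure is a hypothetical array limit, for illustration only.

DEVICES_PER_HOST = 256

# Traditional model: every LUN consumes one of the 256 device slots.
max_luns = DEVICES_PER_HOST

# VVols model: each device slot can be a PE, and each PE can address
# many VVols without consuming further slots.
vvols_per_pe = 10_000  # hypothetical; varies by array
max_vvols = DEVICES_PER_HOST * vvols_per_pe

print(max_luns, max_vvols)
```

Even with a conservative per-PE figure, the number of addressable storage objects per host jumps by several orders of magnitude over the old per-LUN model.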
I hope this gives you an idea of the value of VVols. It is not only about per-VM granularity, which is an excellent benefit in its own right. Consider array-based snapshots on a per-VM basis, and array-based replication on a per-VM basis when we support it. VVols is also about policy-based management for storage, which will ensure that storage is provisioned correctly for VMs. And it is about simplifying storage management for vSphere administrators and scaling storage for vSphere to a degree never before possible.
Maybe VVols isn’t for every array vendor; they have their priorities too. But for us at VMware, all of these factors form some of the main goals of our software-defined storage vision, and they apply to VSAN, VVols, SPBM, VASA and VAIO.