VMworld 2014 – Storage News Part 2

VVols

I guess the next big tech preview at this year’s VMworld was around Virtual Volumes. Yes, we’ve done this before, but this year there were so many vendors showing demos of their VVol implementations, and so many presentations/sessions on the topic, that I believe folks are beginning to realize that we are very close indeed to finally having this feature ready. It’s hard to believe that this was first discussed at VMworld 2011, something I alluded to during the VVol session that I co-delivered with the folks from Nimble Storage at this year’s VMworld.

This topic is going to be huge, and it is going to change the way storage is designed for virtualization environments. And I think almost every storage partner I spoke to at the show is on board. In fact, The Register asked last month whether there was ANY storage vendor that wasn’t working on a VVol implementation.

I’m not going to get into any technical details of Virtual Volumes in this post – there will be lots of time for this later. Instead, I’m going to share how we believe VVols will now enable Software Defined Storage (SDS) for SAN & NAS arrays, similar to how we have achieved this already with VSAN.

The first thing to explain is that with VVols, we are no longer dealing with storage at the LUN or Volume level. Instead, the granularity shifts to the virtual machine disk (VMDK), making VMDKs first-class citizens in the storage world. Now when you do Storage Quality of Service (QoS), array-based snapshots, or even array-based replication, you are working at the VMDK level rather than the LUN or Volume level.

Considering that we are no longer working at the LUN or Volume level but at the VMDK level, does that mean we now have to carve out individual VMDK storage objects on the array? No, absolutely not. As part of the Software-Defined paradigm, VVols introduces the abstraction and pooling of your storage. What is now presented to your ESXi hosts is what we are terming a “storage container”; on the array this can be considered an aggregate or group of disks with some VVol-capable extensions. When the storage container is made visible to the ESXi hosts, a vSphere admin simply creates a VVol datastore from it, in much the same workflow as you would create a VMFS datastore from a LUN. And that is it – array storage is now visible to the host, albeit in an abstracted and pooled manner.
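
To make the relationship concrete, here is a minimal Python sketch of the abstraction just described. The class names (StorageContainer, VvolDatastore) are purely illustrative and are not part of any VMware API; they simply show how a pooled container on the array maps to a VVol datastore on the host.

```python
# A minimal sketch, assuming hypothetical class names (StorageContainer,
# VvolDatastore) -- these are NOT VMware APIs, just a way to picture how a
# pooled container on the array maps to a VVol datastore on the host.

from dataclasses import dataclass, field


@dataclass
class StorageContainer:
    """Pooled capacity exposed by a VVol-capable array (think aggregate or disk group)."""
    name: str
    capacity_gb: int
    capabilities: set = field(default_factory=set)  # data services the array advertises


@dataclass
class VvolDatastore:
    """What the vSphere admin sees after the container is surfaced to the ESXi hosts."""
    name: str
    container: StorageContainer  # no LUNs, no VMFS formatting, just a pointer to the pool


# The workflow mirrors "LUN -> VMFS datastore", but without carving LUNs:
gold_pool = StorageContainer("gold-pool", capacity_gb=50_000,
                             capabilities={"replication", "dedupe", "qos"})
vvol_ds = VvolDatastore("vvol-gold", container=gold_pool)
print(vvol_ds)
```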

So how do I provision VVols for my VM storage? This is where we align with how VSAN provisions storage, but this time we are doing it on SAN and NAS arrays. And it all comes down to vSphere APIs for Storage Awareness (VASA) and the Storage Policy Based Management (SPBM) mechanism. When you have your VASA provider registered, allowing communication between vCenter and the storage array, all of the capabilities/data services on the array are made visible. This could be anything from replication, encryption, deduplication, compression, QoS, performance, availability, and so on. Each storage array vendor will have their own unique set of data services. Using SPBM, we can now create policies in vCenter Server which contain the requirements for your VM, or more importantly, the requirements of the application running in the VM. Do you want a particular QoS for the application? Do you need the VM to be protected/replicated to another storage array? Do you want to be able to take snapshots of the VMDK? All of this information is added to the policy.

Then, at virtual machine deployment time, simply select the policy that is appropriate for your VM/VMDK/application. And note that you can put policies on individual VMDKs; perhaps you need QoS for the data disk but not for the boot disk, in which case you can have two separate policies. When the policy is selected, you will then be presented with storage that is either compliant (can meet the policy settings) or non-compliant (cannot meet the policy settings). If no compliant storage is available (due to maintenance, or perhaps because the requirements are too high to be met by any array), you then decide whether you want to deploy the VM on non-compliant storage, keeping in mind that whatever data services you selected may not be available to the VM/application.
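
As an illustration of the policy idea, here is a hedged Python sketch of how SPBM-style matching can be pictured: a policy is just a set of required capabilities, and a datastore is compliant when the array behind it advertises all of them. The names below (StoragePolicy, is_compliant) are made up for clarity and are not the actual SPBM API.

```python
# A hedged sketch of SPBM-style matching: a policy is a set of required
# capabilities, and a datastore is compliant when the array behind it
# advertises all of them. StoragePolicy / is_compliant are illustrative
# names, not the real SPBM API.

from dataclasses import dataclass, field


@dataclass
class StoragePolicy:
    name: str
    required: set = field(default_factory=set)  # capabilities the VM/VMDK/application needs


def is_compliant(policy: StoragePolicy, advertised: set) -> bool:
    """A datastore is compliant if it offers every capability the policy asks for."""
    return policy.required <= advertised


# Per-VMDK policies: QoS and replication for the data disk, nothing special for boot.
boot_policy = StoragePolicy("boot-disk")
data_policy = StoragePolicy("data-disk", required={"qos", "replication"})

datastores = {
    "vvol-gold":   {"qos", "replication", "dedupe"},
    "vvol-bronze": {"dedupe"},
}

for name, caps in datastores.items():
    status = "compliant" if is_compliant(data_policy, caps) else "non-compliant"
    print(f"{name}: {status}")
```

The same check can be re-run at any time, which is essentially what the compliance view described later gives you after a Storage vMotion or a policy change.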

When the VM creation wizard finishes, a request to instantiate VVols on behalf of the VM is pushed down to the array. A VVol for the VM Home Namespace (called a config-VVol) and a VVol for the VMDK are automatically instantiated on the storage container, and when this completes your VM is ready for use. When the VM is powered on, a third VVol for the VM swap is instantiated. And of course, snapshot operations will instantiate a fourth VVol. For those of you who have been reading up on VSAN, this should all begin to look very similar.
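
The lifecycle above can be sketched in a few lines of Python. The class and naming conventions here are invented purely for illustration; the point is simply which VVols exist after creation, power-on, and a snapshot.

```python
# An illustrative (made-up) model of the VVol objects described above:
# a config VVol plus one data VVol per VMDK at creation time, a swap VVol
# at power-on, and an additional VVol per snapshot.

class VirtualMachine:
    def __init__(self, name, vmdks):
        self.name = name
        self.vvols = [f"{name}-config"]                 # VM Home Namespace (config VVol)
        self.vvols += [f"{name}-{d}" for d in vmdks]    # one data VVol per VMDK

    def power_on(self):
        self.vvols.append(f"{self.name}-swap")          # swap VVol appears at power-on

    def snapshot(self, label):
        self.vvols.append(f"{self.name}-snap-{label}")  # each snapshot adds another VVol


vm = VirtualMachine("app01", vmdks=["disk1"])
vm.power_on()
vm.snapshot("pre-upgrade")
print(vm.vvols)
# ['app01-config', 'app01-disk1', 'app01-swap', 'app01-snap-pre-upgrade']
```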

The other cool thing is that if the application requirements change over time, you simply modify the policy with the additional requirements. This can be done on the fly.

Benefits

VVols fixes one of the most common storage issues we face today, namely making sure that virtual machines reside on the appropriate storage. Right now, I know of vSphere administrators who are tracking LUNs/Volumes, the data services associated with those LUNs/Volumes, and then the list of VMs that should reside on those LUNs/Volumes – it is a management nightmare. A single Storage vMotion, and all that diligence goes to waste – your VM is now running on storage that doesn’t meet its requirements. VVols addresses this; no more spreadsheets, you simply use the policies to ensure VMs are running on the correct storage. If they are not, the VMs will be shown as non-compliant with their storage policy, and remedial action can be taken.

It also fixes another major pain point – QoS for virtual machine storage. Historically, when VMs were deployed on a LUN or Volume, all of the VMs shared the performance of that LUN/Volume. It was very difficult to provide a Service Level Agreement (SLA), as new VMs deployed on the same storage could impact all the other VMs (the noisy neighbour problem). Now with VVols, we no longer have this issue, since each VM has its own unique storage objects. Therefore we can now start to offer SLAs on VM storage.

VVols also addresses the scalability issues we currently have with storage. Right now, we have a limit of 256 LUNs per host. VVols removes this restriction, allowing many thousands of VVols to be instantiated per storage container.

Software Defined Storage – Abstracted, Pooled and Automated

Now consider this. I might have VSAN, VVol-capable NAS & SAN arrays, and maybe even cloud storage from vCloud Air all visible to my vCenter server (all of it abstracted and pooled). How do I figure out which storage my VM should be deployed to? Well, with Software Defined Storage, you will have the capabilities from all three storage types visible in vCenter. You build your policy, picking the capabilities required by the application running in the VM, and submit it. VASA/SPBM will examine the policy settings and automatically provision the VM on the appropriate storage to meet your requirements. This is, in essence, what Software Defined Storage is all about.
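
Here is a hedged sketch of that placement decision, under the assumption that each pool can be reduced to a set of advertised capability strings: one policy, several abstracted pools (VSAN, a VVol-capable array, cloud storage), and the first pool that satisfies the policy wins. Pool names and capabilities are invented for illustration; the real machinery does considerably more, but the principle is the same.

```python
# A sketch of the placement decision, assuming each pool can be reduced to a
# set of advertised capability strings. Pool names and capabilities below are
# invented for illustration only.

def place_vm(required, pools):
    """Return the first pool that satisfies every requirement, else None."""
    for name, capabilities in pools.items():
        if required <= capabilities:
            return name
    return None


pools = {
    "vsan-cluster":    {"raid1", "flash-read-cache"},
    "vvol-san-array":  {"replication", "qos", "dedupe"},
    "vcloud-air-pool": {"geo-redundancy"},
}

target = place_vm({"replication", "qos"}, pools)
print(target or "no compliant storage: decide whether to deploy anyway")
```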

Exciting times ahead in the storage space for sure!
