
Migrations and Virtual Volumes – Deep Dive

Recently I published an article on Virtual Volumes (VVols) where I touched on a comparison between how migrations typically worked with VAAI and how they now work with VVols. In the meantime, I managed to have some really interesting discussions with some of our VVol leads, and I thought it worth sharing here as I haven’t seen this level of detail anywhere else. This is rather a long discussion, as there are a lot of different permutations of migrations that can take place. There are also different states that the virtual machine could be in. We’re solely focused on VVols here, so although different scenarios are offered up, I highlight which scenario we are actually considering.

Caution – this is a very long blog post, and while it could have been separated out into multiple different posts, I eventually decided it best to keep it all in one place. Grab a coffee – you have been warned!

Virtual machine states

Let’s begin by looking at the different states that a virtual machine can be in when it comes to migration. There are 4 cases to consider altogether. These are:

  1. Powered on Storage vMotion without snapshots
  2. Powered on Storage vMotion with snapshots
  3. Powered off cold migration without snapshots
  4. Powered off cold migration with snapshots

As you will see, there are some different behaviors to take into account when the VM has snapshots, and when the VM does not have snapshots.

Migration options

There are also a number of different migration scenarios which could be considered:

  1. Migrating a VVol based VM between storage containers on the same array
  2. Migrating a VVol based VM between storage containers on different arrays (i.e. different VASA Providers)
  3. Migrating a VM from a traditional datastore (e.g. VMFS) to a VVol datastore
  4. Migrating a VM from a VVol datastore to a traditional datastore (e.g. VMFS)

In this post, I’m only going to be looking at one particular use case, the first one: “Migrating a VVol based VM between storage containers on the same array”.

This can occur in two ways. The first is when a customer surfaces up multiple storage containers, each presented as a different VVol datastore. Each storage container may offer a different class of storage; for example, one pool of storage may offer deduplication while another offers replication or flash. In this way, a customer can use the vSphere client to migrate VMs just as they would move VMs between traditional datastores. Picture multiple classes of storage (A, B, and C), each mapped to its own storage container, which is in turn surfaced as its own VVol datastore in vSphere.
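To make this scenario concrete, here is a minimal pyvmomi sketch of a Storage vMotion between two VVol datastores. The hostname, credentials, VM name and datastore name are all hypothetical placeholders; from the host’s point of view this is an ordinary relocate, and vSphere decides internally whether the array can offload the copy.

    # Minimal sketch: a Storage vMotion between two VVol datastores with pyvmomi.
    # All names (vcenter.example.com, app-vm01, VVol-Datastore-B) are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only; validate certs in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        # Walk the inventory and return the first managed object matching 'name'.
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.DestroyView()

    vm = find_by_name(vim.VirtualMachine, "app-vm01")
    target_ds = find_by_name(vim.Datastore, "VVol-Datastore-B")

    # A relocate spec with only the datastore set is a pure Storage vMotion.
    spec = vim.vm.RelocateSpec(datastore=target_ds)
    WaitForTask(vm.RelocateVM_Task(spec))
    Disconnect(si)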

It can also happen when a customer changes a VM Storage Policy and the current storage container no longer satisfies the requirements in the policy. If a VM needs a new data service or a new class of storage that isn’t available on the source storage container/VVol datastore, you can migrate it to another one that does offer what you are looking for.

There is of course another way for a VM to get new capabilities without the need for a Storage vMotion operation. Let’s say that a customer surfaces up a single VVol datastore, behind which there is a single storage container, but this container has multiple storage classes/capabilities associated with it on the array. The customer deploys a VVol based VM with a particular VM Storage Policy, which places the VVols on a class of storage that meets the requirements. The customer then changes the VM Storage Policy, and the current storage class no longer satisfies the requirements; however, the storage container has other classes which can. In that case there may be no need to migrate the VM at all, since the same storage container can satisfy the requirement through a different class of storage, and the array may be capable of moving the VM automatically to the appropriate pool. Picture multiple pools of storage (A, B, and C) with different capabilities (dedupe, replication, flash), all mapped to a single storage container which is in turn surfaced as a single VVol datastore in vSphere.

To recap, a storage container is mapped 1:1 to a VVol datastore in vCenter and on whatever ESXi hosts the vSphere admin decides to present the storage container. Since storage containers can present a range of capabilities, changing the storage policy associated with a VM may or may not necessitate a move to a different datastore (storage container at the backend). If the same storage container can satisfy the new policy, the array can quietly make whatever adjustments are needed in the background. On the other hand, the newly assigned policy may not be satisfiable in the current storage container, and a storage migration to a compatible datastore may be needed.
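To ground both cases, here is a hedged pyvmomi sketch, reusing the connection and lookup helper from the earlier example. The policy ID is a hypothetical placeholder; in practice it would be retrieved from the SPBM (pbm) service.

    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Hypothetical placeholder: a real profile ID would come from the SPBM (pbm)
    # endpoint. "vm" and "target_ds" are looked up as in the earlier sketch.
    NEW_POLICY_ID = "aa6d5a82-1c88-45da-85d3-3d74b91a5bad"
    policy = vim.vm.DefinedProfileSpec(profileId=NEW_POLICY_ID)

    # Case 1: the current storage container can satisfy the new policy.
    # Reassigning the policy in place is enough; the array may move the VVols
    # between its internal pools without any Storage vMotion.
    WaitForTask(vm.ReconfigVM_Task(vim.vm.ConfigSpec(vmProfile=[policy])))

    # Case 2: the container cannot satisfy the policy, so a Storage vMotion to
    # a compatible VVol datastore is needed, carrying the new policy along.
    WaitForTask(vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target_ds,
                                                       profile=[policy])))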

Offload mechanisms

And when we consider VAAI and VVols, along with block storage and NAS storage, there are in essence 3 “classes” of offloads. These are:

  1. Full hardware offload – the migration is handed off entirely to the array via the VASA APIs (e.g. cloneVirtualVolume)
  2. Host orchestrated hardware offload – the ESXi host drives the copy, but hands the data transfer to the array via VAAI primitives (e.g. XCOPY)
  3. Space efficiency optimization – the host based data mover does the copy, but uses the VASA bitmap APIs to move only the blocks that are actually in use

Just to elaborate on “host orchestrated hardware offload”: if the storage array does not support the full VASA primitive for VVols, but does support the VAAI XCOPY primitive for example, the ESXi host will instruct it to use that offload mechanism for the data transfer. If XCOPY isn’t supported, or isn’t supported between the two storage containers, there is no offloading, so what we effectively end up performing is a full host based copy of the VM, followed by a deletion of the original VM.

It should also be noted that the data mover supports VAAI offload using XCOPY for migrations from VMFS *TO* VVols, but not *FROM* VVols to VMFS. This may change in future, but it is the behaviour in vSphere 6.0.

The last point, space efficiency optimization, is not really an offload but it is an important optimization. The VAAI-NAS primitive “Full File Clone” can only be used on powered off virtual machines. When performing a Storage vMotion of a powered on VM on file based storage, the ESXi host cannot utilize any of the VAAI primitives and needs to fall back to the data mover for this operation. Previously, the complete VMDK had to be processed, scanning it for used blocks, and the used blocks were then migrated – not very efficient. Now, the new “space efficiency optimization” provides the ability to track used/changed blocks via the VASA bitmap APIs, so only those blocks need to be copied.
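The bitmap calls themselves are internal to the host rather than a public SDK, but the optimization is easy to illustrate. Here is a purely conceptual Python sketch (all names are hypothetical) contrasting a naive full scan with a bitmap driven copy:

    # Conceptual sketch only: the real allocation bitmap comes from internal
    # VASA calls; here it is modeled as a simple list of booleans.
    BLOCK_SIZE = 1024 * 1024  # assume 1 MiB blocks for illustration

    def copy_full_scan(src, dst, num_blocks):
        # Naive approach: read every block and copy any that contain data.
        for i in range(num_blocks):
            block = src.read(i, BLOCK_SIZE)
            if any(block):                 # host must scan each block itself
                dst.write(i, block)

    def copy_with_bitmap(src, dst, allocated_bitmap):
        # Bitmap driven approach: only touch blocks the array reports in use.
        for i, in_use in enumerate(allocated_bitmap):
            if in_use:                     # unallocated blocks are never read
                dst.write(i, src.read(i, BLOCK_SIZE))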

These are the steps that a migration operation goes through with VVols:

  1. Attempt the VASA API offloads first (e.g. cloneVirtualVolume)
  2. If that fails (perhaps the array does not support the VASA APIs), the operation falls back to a host orchestrated clone, and we attempt to use the hardware (VAAI) data mover
  3. If hardware offloads (VAAI primitives) are available, these are used
  4. If hardware offloads (VAAI primitives) are unavailable (perhaps the array does not support VAAI, or the VAAI-NAS plugin isn’t available), we fall back to using the software data mover/host based copy process
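This decision tree is internal to ESXi, but the fallback order can be summed up in a few lines of Python. This is only a sketch; the function and its return values are hypothetical stand-ins, not real vSphere APIs:

    # Hypothetical sketch of the migration fallback chain described above.
    def choose_migration_path(vasa_clone_possible: bool,
                              vaai_offload_possible: bool) -> str:
        if vasa_clone_possible:        # full hardware offload (cloneVirtualVolume)
            return "vasa-clone-offload"
        if vaai_offload_possible:      # host orchestrated clone + VAAI (e.g. XCOPY)
            return "host-orchestrated-vaai"
        return "software-datamover"    # host based copy, space efficient via bitmaps

    # Example: the array supports VAAI XCOPY but not the VASA clone primitive.
    print(choose_migration_path(vasa_clone_possible=False,
                                vaai_offload_possible=True))
    # -> host-orchestrated-vaai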

Note that VVol -> VVol migrations using the cloneVirtualVolume VASA API are limited to VVol datastores managed by the same VASA Provider (VP). A clone or migration operation cannot be offloaded between two storage containers managed by different VPs (even if they’re the same vendor, model, etc.). If migration using the cloneVirtualVolume VASA API is not possible, the fallback is to use the data mover with VAAI acceleration, or a complete (albeit space efficient) host based copy as described above.

Let’s take each of the different cases, and then see which class is relevant.

1. Powered On Storage vMotion without snapshots

For a powered on VM without snapshots, the Storage vMotion driver coordinates the migration. The driver will use the data mover to move sections of the currently running virtual machine, and the data mover will employ “host orchestrated hardware offloads” when it can. Let’s take each of the different storage types in turn.

If the VM is a block VVol, the following operations occur:

If the VM is a NAS VVol, the following operations occur:

2. Powered On Storage vMotion with snapshots

For a powered on VM with snapshots, the migration of the snapshots is done first, then the Storage vMotion driver will use the data mover to move the running VM.

If the VM is a block VVol, the following operations occur:

If the VM is a VVol on NAS Storage, the following operations occur:

3. Powered off cold migration without snapshots

For a powered off VM, the Storage vMotion driver is not in the picture, so effectively a cold migration of a powered off VM is a logical move: clone the VM, then delete the source.
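Although the user-facing API call is still a relocate, the logical effect can be expressed with public pyvmomi calls as a clone followed by a delete. This is a hedged sketch only, reusing vm and target_ds from the earlier examples:

    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Clone the powered off VM to the destination VVol datastore...
    clone_spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(datastore=target_ds),
        powerOn=False,
        template=False,
    )
    WaitForTask(vm.CloneVM_Task(folder=vm.parent, name=vm.name + "-moved",
                                spec=clone_spec))

    # ...then remove the original once the copy has completed.
    WaitForTask(vm.Destroy_Task())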

If the VM is a block VVol, the following operations occur:

If the VM is a NAS VVol, then it behaves the same as a block VVol and the following operations occur:

4. Powered off cold migration with snapshots

This is pretty much the same general idea as previously mentioned, but now we look at migrating VMs that also have snapshots.

If the VM is a block VVol, the following operations occur:

If the VM is a NAS VVol, then it behaves the same as a block VVol and the following operations occur:

Conclusion

As you can see, depending on the state of the VM (powered on or powered off, with or without snapshots), depending on whether the array supports VVols, and finally depending on whether the array is block or NAS, a number of different things may happen. There are optimizations for some situations and not for others. Note that the system automatically uses the most efficient mechanism(s) available in all combinations.
