Let’s begin this post with a recap of the Storage vMotion enhancements made in vSphere 5.0. Storage vMotion in vSphere 5.0 enabled the migration of virtual machines with snapshots, as well as the migration of linked clones. It also introduced a new mirroring architecture: once a changed disk block has been copied to the destination, any further guest write to that block is forked to both the source and the destination disks (mirror mode). This means a migration can be completed in a single copy operation. Mirroring I/O between the source and destination disks brings significant gains over the iterative disk pre-copy changed block tracking (CBT) mechanism used in previous versions, and it makes migration times more predictable (and shorter).
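To make the mirror-mode idea a little more concrete, here is a minimal toy sketch in Python. It is purely my own illustration, not VMware code: it models the rule that a guest write to a block which has already been copied is forked to both source and destination, while a block not yet copied simply picks up the new data when the single copy pass reaches it (locking and in-flight races are ignored).

```python
class MirrorModeMigration:
    """Toy model of mirror mode: one sequential copy pass plus write forking."""

    def __init__(self, source, dest):
        self.source = source      # blocks on the source VMDK
        self.dest = dest          # blocks on the destination VMDK
        self.copied = set()       # blocks the copy pass has already moved

    def guest_write(self, block, data):
        """A write issued by the running VM while the migration is in flight."""
        self.source[block] = data
        if block in self.copied:
            # Already copied: fork the write to the destination as well,
            # so no second (iterative) copy pass is ever needed.
            self.dest[block] = data
        # Not yet copied: the copy pass below will pick up the new data.

    def copy_pass(self):
        """One pass over the disk; with mirror mode this is all that is needed."""
        for block in range(len(self.source)):
            self.dest[block] = self.source[block]
            self.copied.add(block)


src = list(b"OLD DATA")           # pretend these are disk blocks
dst = [0] * len(src)
m = MirrorModeMigration(src, dst)
m.guest_write(0, ord("N"))        # before the block is copied: source only
m.copy_pass()
m.guest_write(1, ord("E"))        # after the copy: forked to both disks
assert dst == src                 # destination stays consistent
```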
vSphere 5.1 Enhancements
In vSphere 5.1, up to 4 parallel disk copies per Storage vMotion operation can be performed. In previous versions, all VMDKs belonging to the same migrating VM were copied serially. So in 5.1, if a request is received to migrate a VM with 6 disks in a single Storage vMotion operation, the first four disk copies are initiated simultaneously, and the next disk copy is kicked off as soon as one of the other copies finishes.
To reduce the performance impact on other virtual machines sharing the datastores during a Storage vMotion, parallel disk copies only apply when distinct datastores are involved. This means that if a virtual machine has two VMDK files residing on datastores A and B respectively, parallel disk copies will only take place if the destination datastores are C and D. A fan-out disk copy (copying two VMDK files from datastore A to datastores B and C, for example) will not get parallel disk copies. The common use case for parallel disk copies is the migration of a virtual machine configured with a VMDK anti-affinity rule inside a datastore cluster, which keeps its disks on separate datastores.
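One way to picture the eligibility rule is the toy scheduler below. Again, this is my own sketch rather than the actual VMkernel logic, and it assumes one plausible reading of the rule above: at most four disk copies in flight per operation, and a copy is held back while another in-flight copy of the same operation is already using its source or destination datastore.

```python
from collections import namedtuple

DiskCopy = namedtuple("DiskCopy", "name src_ds dst_ds")

def schedule(pending, max_parallel=4):
    """Yield groups of disk copies that may run concurrently."""
    pending = list(pending)
    while pending:
        batch, busy = [], set()
        for copy in list(pending):
            if len(batch) == max_parallel:
                break
            if copy.src_ds in busy or copy.dst_ds in busy:
                continue          # shares a datastore with an in-flight copy
            batch.append(copy)
            busy.update({copy.src_ds, copy.dst_ds})
            pending.remove(copy)
        # Simplification: in reality the next eligible copy starts as soon as
        # any in-flight copy finishes, rather than waiting for the whole batch.
        yield batch

# Fan-out from datastore A is serialised; disks on distinct pairs run together.
disks = [DiskCopy("disk1", "A", "C"),
         DiskCopy("disk2", "B", "D"),
         DiskCopy("disk3", "A", "E")]
for batch in schedule(disks):
    print([d.name for d in batch])    # ['disk1', 'disk2'] then ['disk3']
```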
If you are moving disks between different datastores in a single Storage vMotion operation, this should speed things up significantly.
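For anyone scripting such a migration, the per-disk placement in the relocate spec is what spreads the VMDKs across distinct destination datastores. The sketch below uses pyVmomi, the vSphere Python SDK; the vCenter address, credentials, VM name and datastore names are placeholders, and production code would want proper certificate validation and error handling.

```python
#!/usr/bin/env python
# Hedged pyVmomi sketch: relocate each of a VM's disks to a distinct
# destination datastore so the disk copies are eligible to run in parallel.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Find an inventory object of the given type by name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

vm = find_obj(vim.VirtualMachine, "myVM")
targets = [find_obj(vim.Datastore, n) for n in ("datastoreC", "datastoreD")]

disks = [d for d in vm.config.hardware.device
         if isinstance(d, vim.vm.device.VirtualDisk)]

spec = vim.vm.RelocateSpec()
spec.datastore = targets[0]                  # VM home files (.vmx, logs, ...)
locators = []
for disk, ds in zip(disks, targets):         # one distinct destination per disk
    locator = vim.vm.RelocateSpec.DiskLocator()
    locator.diskId = disk.key
    locator.datastore = ds
    locators.append(locator)
spec.disk = locators

WaitForTask(vm.RelocateVM_Task(spec))        # a single Storage vMotion operation
Disconnect(si)
```

The key piece is the one DiskLocator per disk: give each a different destination datastore and the copies become candidates for parallel execution, subject to the four-copy limit described above.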
There is still a limit of 2 concurrent Storage vMotion operations per host and 8 per datastore. It’s important to note that the limit of 8 concurrent Storage vMotion operations per datastore doesn’t directly relate to the parallel disk copy change. For example, if you issue a single Storage vMotion operation (leaving room for another 7 Storage vMotion operations on the target datastores), that single Storage vMotion operation could be moving all 60 of a VM’s disks.
Storage vMotion of VMs with Snapshots
This is slightly off topic, but may be of interest. It should be noted that in ESXi 5.0, the snapshot placement mechanism changed with respect to previous versions: the virtual machine snapshot (delta) disks now follow the parent disk. If a VM has two virtual disks, one (diskA) residing on datastoreA and another (diskB) on datastoreB, then snapshot creation will place the snapshot disks of diskA on datastoreA and the snapshot disks of diskB on datastoreB. Previously, all virtual machine snapshot disk files resided in the VM’s home folder.
Also, a possible concern arises when you do a Storage vMotion operation on a VM with snapshots and you are using the WorkingDir parameter. I wrote a long explanation about this on the vSphere Storage blog here, so have a read about the details there. The bottom line is that the VM snapshots are all moved to the VM’s home folder when a Storage vMotion of the VM is initiated, even if the WorkingDir parameter has them set elsewhere. This behaviour is still the same in 5.1, so take care.
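If you want to confirm where the snapshot files actually ended up after such a migration, the VM’s layoutEx property lists every file with its full datastore path. Here is a small helper, assuming a pyVmomi vm object obtained as in the earlier sketch:

```python
def print_vm_file_layout(vm):
    """List every file in a VM's layout with its full datastore path.

    'vm' is a pyVmomi vim.VirtualMachine, obtained as in the earlier sketch.
    After a Storage vMotion this makes it easy to confirm where the snapshot
    redo logs (-00000x.vmdk), the .vmsd/.vmsn files and the base disks live.
    """
    for f in vm.layoutEx.file:
        # f.type is e.g. 'config', 'diskDescriptor', 'diskExtent',
        # 'snapshotList' or 'snapshotData'; f.name is '[datastore] path/file'
        print("%-16s %s" % (f.type, f.name))
```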
Get notification of these blog postings and more VMware Storage information by following me on Twitter: @CormacJHogan