vCloud Director 5.1 & Storage DRS
Another feature introduced with vSphere 5.1 & vCloud Director 5.1 is the interoperability between vCloud Director & Storage DRS. vCloud Director can now use datastore clusters for the placement of vCloud vApps, letting Storage DRS do what it does best – choose the best datastore in the datastore cluster for the initial placement of the vApp, and then load balance the capacity and performance of the datastores through the use of Storage vMotion.
However, what about Fast Provisioned vCloud vApps, which are based on linked clones? Well, yes, this is also supported. Storage DRS now understands how to handle linked clone objects, which it did not do previously.
Shadow VMs
To begin with, let’s talk a little about Fast Provisioned vCloud vApps and shadow VMs. If a fast provisioned vCloud vApp is deployed from the catalog to a different datastore, a shadow VM of the vApp is instantiated from the catalog on the datastore. A shadow VM is an exact copy of the base disk. The fast provisioned vCloud vApp (which is effectively a linked clone) then references the local shadow VM on the same datastore; it does not reference the version of the vApp in the catalog. There is a great KB article here which discusses Fast Provisioned vCloud vApps in more detail.
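The space behaviour described above can be sketched in a few lines. This is a minimal model, not vCloud Director code, and the disk sizes are assumed purely for illustration: the first fast provisioned clone on a datastore pays for a full shadow VM copy of the base disk, while every later clone on the same datastore adds only a small delta disk.

```python
# Hypothetical sizes, chosen only to illustrate the accounting.
BASE_DISK_GB = 40.0   # assumed size of the catalog base disk
DELTA_GB = 2.0        # assumed size of each linked-clone delta disk

def datastore_cost_gb(clones_on_datastore):
    """Space consumed on one datastore by n fast-provisioned clones.

    One shadow VM (a full copy of the base disk) per datastore,
    shared by every clone placed there.
    """
    if clones_on_datastore == 0:
        return 0.0
    return BASE_DISK_GB + clones_on_datastore * DELTA_GB

print(datastore_cost_gb(1))  # prints 42.0 – first clone pays shadow + delta
print(datastore_cost_gb(5))  # prints 50.0 – later clones add only a delta
```

This is why clones pile up on the datastore that already holds a shadow VM: the marginal cost of another clone there is just one delta disk.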
However, I haven’t seen much documentation on Storage DRS & vCloud Director 5.1 interoperability, so I decided to investigate some of the behaviour in my own environment. My setup had the Fast Provisioned option enabled for my ORG-VDC:
In this example, I imported a VM from vSphere into my catalog on a Storage DRS datastore cluster. I am using this catalog entry to fast provision (fp) vCloud vApps. I next deployed a vApp from my catalog, but changed the Storage profile so that a new, fully automated Storage DRS datastore cluster becomes the destination. I observed the clone virtual machine task running, and the shadow VM folder being created on one of the datastores in the datastore cluster. When the operation completed, I saw the shadow VM and the fp vCloud vApp deployed to that datastore in the datastore cluster:
I made some subsequent deployments of fp vCloud vApps to this datastore cluster to see whether additional shadow VMs would be instantiated automatically on other datastores. Note that fp vCloud vApps deployed to the same datastore in the datastore cluster do not need a new shadow VM; they simply reference the existing shadow VM as their base disk. My second fp vCloud vApp went to the same datastore. This is expected, since instantiating another shadow VM would incur some overhead and space consumption. The next four fp vCloud vApps also went to the same datastore. At that point I went to the Storage DRS Management view and clicked Run Storage DRS Now. A number of Storage vMotion operations then executed, and my fp vCloud vApps were balanced across all the datastores in the datastore cluster. One assumes this would eventually have happened automatically, since Storage DRS needs a little time to gather enough information to determine the correct balancing of the datastore cluster.
Storage DRS Migration Decisions
In the case of fast provisioned vCloud vApps, Storage DRS will not recommend initial placement to a datastore which does not contain the base disk or a shadow VM, nor will it recommend migrating fast provisioned vCloud vApps to such a datastore. Preference is always given to datastores in the datastore cluster which already contain either the base disk or a shadow VM, as observed in my testing.
However, if Storage DRS capacity or latency thresholds are close to being exceeded on some datastores in the datastore cluster, Storage DRS can instantiate new shadow VMs on other datastores in the cluster. This allows additional fp vCloud vApps to be initially placed on, or migrated to, those datastores. This is also what I observed during testing: when I clicked Run Storage DRS Now, new shadow VMs were instantiated on other datastores in the datastore cluster. Now fp vCloud vApps (based on linked clones) can be placed on any datastore in the datastore cluster.
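The placement preference just described can be sketched as a simple heuristic. To be clear, this is not VMware's actual algorithm – the function name, the dictionary shape, and the 80% threshold are all assumptions for illustration: prefer a datastore that already holds a shadow VM, and only instantiate a new shadow elsewhere when those datastores approach their space threshold.

```python
# Assumed utilization threshold; Storage DRS thresholds are configurable.
SPACE_THRESHOLD = 0.80

def place_clone(datastores):
    """Pick a datastore for a new fast-provisioned clone.

    datastores: list of dicts with 'name', 'used', 'capacity', 'has_shadow'.
    Returns (chosen datastore, action taken).
    """
    def utilization(ds):
        return ds["used"] / ds["capacity"]

    # Prefer datastores that already hold a shadow VM and have headroom:
    # placing there only adds a small delta disk.
    candidates = [ds for ds in datastores
                  if ds["has_shadow"] and utilization(ds) < SPACE_THRESHOLD]
    if candidates:
        return min(candidates, key=utilization), "reuse shadow"

    # All shadow-holding datastores are near the threshold: pick the
    # emptiest datastore and instantiate a new shadow VM there.
    target = min(datastores, key=utilization)
    target["has_shadow"] = True
    return target, "new shadow"
```

Under this sketch, clones keep landing on the shadow-holding datastore until it nears the threshold, at which point a new shadow appears on another datastore – which matches the behaviour observed in the testing above.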
If there is a cross-datastore linked clone configuration (created via the APIs, for example) and the linked clone vCloud vApp references a base disk on a different datastore, you may find that Storage DRS will not surface recommendations for this vApp. Such configurations should be avoided if you want to use Storage DRS with vCloud Director.
So what about migration decisions? The decision to migrate a VM depends on several factors, such as:
- The amount of data being moved
- The amount of space freed on the source datastore
- The amount of additional space consumed on the destination datastore
For linked clones, these factors depend on whether or not the destination datastore has a copy of the base disk, or whether a shadow VM must be instantiated. The new model in Storage DRS takes linked clone sharing into account when calculating the effects of potential moves.
On initial placement, putting the linked clone on a datastore without the base disk or shadow VM is more costly (uses more space) than placing the clone on a datastore where the base disk or shadow VM resides.
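The three factors listed above can be put into a small worked example. This is an illustrative cost model with assumed numbers, not VMware's internal math: if the destination already has the base disk or a shadow VM, only the delta disk moves; otherwise the full base disk has to come along too, and the source shadow can only be reclaimed once no remaining clone references it.

```python
def migration_cost_gb(delta_gb, base_gb, dest_has_shadow, src_clones_remaining):
    """Return (data_moved, space_freed_on_source, space_added_on_dest).

    All sizes in GB. A hedged model of the factors Storage DRS weighs,
    not the actual implementation.
    """
    # Data moved: just the delta if the destination already has a
    # base disk/shadow VM, otherwise the base disk must be copied too.
    data_moved = delta_gb if dest_has_shadow else delta_gb + base_gb

    # The source shadow VM is only reclaimable once no clone references it.
    space_freed = delta_gb + (base_gb if src_clones_remaining == 0 else 0)

    space_added = delta_gb if dest_has_shadow else delta_gb + base_gb
    return data_moved, space_freed, space_added

# 2 GB delta, 40 GB base disk (assumed sizes):
print(migration_cost_gb(2, 40, dest_has_shadow=True, src_clones_remaining=3))
print(migration_cost_gb(2, 40, dest_has_shadow=False, src_clones_remaining=0))
```

The asymmetry is the whole point: moving to a shadow-holding datastore costs 2 GB in this example, while moving to one without a shadow costs 42 GB – which is why preference goes to datastores that already contain the base disk or a shadow VM.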
Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @CormacJHogan
Excellent article Cormac. Love the super simple but very insightful diagram.
Kudos has to go to Frank Denneman for the great diagram.
Thank you for this information. We have a Labmanager (vmlogix) environment that uses linked clones heavily. We are looking to replace these environments with vCloud. I was worried about how linked clones would play with it and Storage DRS.
Our current environment runs about 200 VMs with about 600 templates/links in the tree.
Thank you,
Kevin
Hi Cormac.
Page 9 of the vCD 5.1 Install and Upgrade guide says:
vCenter clusters used with vCloud Director must not enable storage DRS.
Is this not the case? Are VMware now supporting Storage DRS with vCD 5.1?
I think this is a typo, Nigel. Let me look into it for you. My understanding is that Storage DRS & vCloud Director are now fully integrated.
Thanks for the swift reply. Looking at the latest version of that document (rather than the one issued at release time), that line has now disappeared.
Good – guess they caught it already then 🙂
A related question: would you ever expect a shadow VM to be storage vMotioned by Storage DRS?
It just happened to us and effectively broke the Shadow by splitting the disks across multiple datastores. This may be because we didn’t have StorageDRS enabled initially (i.e. when the Shadow was created). Fortunately no clones remained using that shadow (which probably would have prevented the movement anyway), so I could delete the shadow and it was recreated cleanly.
I would ensure that the shadow VM does not participate in Storage DRS, Nigel. I’m not sure if there is any best practice explicitly stating that, but you would want to keep the shadow VM permanently on that datastore.
Hi Cormac
I’m using Fast Prov and Thin Prov and am only interested in SDRS as a way to ensure that my datastores do not run out of disk space (my arrays do their own hotspot detection, etc.). So basically, I would like SDRS to kick in when a DS goes beyond, say, 90% utilisation and start balancing the remaining free space. My attempts so far have been met with failure, e.g. I have a storage cluster with 5 x 1 TB datastores, four of which have ~50% free space and one of which has 0.34% free space, yet no SDRS migrations are occurring. SDRS I/O metrics have been disabled.
Is what I am trying to do possible, or is space utilisation only considered during the initial placement of a vApp?
Thanks!
Hi Mark,
Thin Provisioned VMDKs have some subtleties when used in the context of Storage DRS. Storage DRS adds buffer space to thin disks when determining load balancing options. Perhaps the buffer space added by Storage DRS to the thin disks on the datastore with ~50% free is actually consuming much more space than you think, preventing migrations. I suggest having a read of Frank Denneman’s excellent article on thin disk behaviour in Storage DRS. You can find it here – http://frankdenneman.nl/2012/10/01/avoiding-vmdk-level-over-commitment-while-using-thin-disks-and-storage-drs/. Frank discusses how the buffer sizes can be tuned.
HTH
Cormac
Thanks Cormac, will have a fiddle with that setting and let you know the outcome!
Unfortunately setting PercentIdleMBinSpaceDemand to 0 hasn’t made much difference for me. However, I’ve come to realise that SDRS has made a nice little mess of my linked clones. It has moved shadows all over the place and created cross-datastore linked clones, so this is probably why SDRS isn’t surfacing recommendations when my datastores are running dangerously low on space. This also probably explains why I am seeing “This shadow VM has become duplicated and unusable on the datastore” system alerts against some existing shadow VMs in vCloud Director. After SDRS moves a shadow, it leaves behind only the VMDKs.
I intend to clean up this mess on the weekend, so my plan once I have a clean environment again is as follows:
- Change SDRS Automation to manual mode to prevent shadow VMs from automatically participating in SDRS
- Dynamically change the SDRS automation level to Fully Automated on a per-VM basis, but only for VMs that we want to move around
- Remove the PercentIdleMBinSpaceDemand setting just to see if I need it or not
The reason for me changing SDRS to manual is because shadows are being created on a fairly regular basis and I don’t want to have to keep track of this from week to week!
My hope is that if a DS runs low on space, vCD will clone a new shadow on a different datastore and SDRS can then move and re-link some of the existing clones to the new shadow to balance the space… though to be honest I’m not entirely sure if it will do this.