VSAN Part 10 – Changing VM Storage Policy on-the-fly
This is quite a unique aspect of VSAN, and plays directly into what I believe to be one of the crucial factors of software-defined storage. It's probably easier to use an example to explain why being able to change storage policies on the fly is such a cool feature.
Let’s take a scenario where an administrator has deployed a VM with the default VM storage policy, which states that the VM storage objects should have no disk striping and should tolerate one failure. The layout of the VM could look something like this:
The admin then notices that the virtual machine deployed on their VSAN is getting a 90% read cache hit rate. This implies that 10% of reads need to be serviced from HDD. At peak time, this VM is doing 2,000 read operations per second. Therefore, there are 200 reads per second that need to be serviced from HDD (the 10% of reads which are cache misses). The specifications on the HDDs imply that each disk can do 150 IOPS, meaning that a single disk cannot service these additional 200 IOPS. To meet the I/O requirements of the VM, a stripe width of two disks should be implemented.
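To make the arithmetic explicit, here is a quick back-of-the-envelope check in Python using the numbers from the example above (the 150 IOPS per HDD figure is the assumed disk spec, not a measured value):

```python
import math

# Numbers from the example above; HDD IOPS is an assumed spec.
peak_read_iops = 2000      # reads per second at peak
cache_hit_rate = 0.90      # 90% of reads served from the SSD read cache
hdd_iops_per_disk = 150    # what a single magnetic disk can sustain

hdd_read_iops = peak_read_iops * (1 - cache_hit_rate)        # 200 reads/s hit HDD
stripe_width = math.ceil(hdd_read_iops / hdd_iops_per_disk)  # 2 disks needed

print(f"Reads hitting HDD: {hdd_read_iops:.0f}/s -> stripe width of {stripe_width}")
```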
On VSAN, the administrator has two options to address this. They can simply edit the VM Storage Policy created for the VM and add a stripe width requirement to the policy. Or indeed, they could create a brand new policy that is identical to the previous policy but has an additional capability for stripe width. Once the new policy is in place, the administrator simply synchronizes the new/updated policy with the virtual machine. VSAN takes care of the underlying component changes required to meet the new policy, while the virtual machine is still running. It does this by building new storage objects with the additional components (in this case a RAID-0 stripe) while the original storage objects are still in place.
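The post does this entirely through the vSphere UI, but for those who prefer scripting, here is a rough pyVmomi sketch of re-associating a running VM (and its first disk) with an updated storage policy. The vCenter address, credentials, VM name and the profileId string are all placeholders; in practice you would look the profile ID up via the SPBM (pbm) API.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details.
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='********',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the VM by name (placeholder name).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'my-vsan-vm')

# Placeholder policy ID - normally retrieved from SPBM.
policy = vim.vm.DefinedProfileSpec(profileId='00000000-placeholder-policy-id')

spec = vim.vm.ConfigSpec()
spec.vmProfile = [policy]  # apply the policy to the VM home object

# Re-associate the first virtual disk with the same policy.
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
spec.deviceChange = [vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
    device=disk,
    profile=[policy])]

# VSAN rebuilds the affected components in the background while the VM runs.
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)
```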
Compare this to the tasks one would have to do on a traditional storage array to achieve this. It would involve, at the very least, the migration of VMs from the original datastore, the decommissioning of said LUN/Volume, the creation of a new LUN with the new storage requirements (different RAID level) and possibly the reformatting of the LUN with VMFS in the case of block storage. Finally, you would have to migrate your VMs back to the new datastore. On VSAN, by contrast, once the new storage objects have been created and synchronized, the older storage objects are simply removed. This feature of VSAN is very cool indeed.
What hasn’t been shown here are the additional witness components that could be created by such a change. Remember that in order for a VM to continue to access all its components, greater than 50% of the components of each object must still be available in the cluster. Therefore changes to the VM Storage Policy could result in additional witness components being created.
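To make the quorum rule concrete, here is a tiny illustration of my own (the component counts are made up, not taken from a real layout) of the "greater than 50% must be available" check:

```python
def has_quorum(available: int, total: int) -> bool:
    """An object stays accessible only while strictly more than half of its
    components (data replicas, stripes and witnesses) remain available."""
    return available > total / 2

# FTT=1, stripe width=1: two mirrored data components + one witness = 3 components.
print(has_quorum(available=2, total=3))  # True  -> object still accessible
print(has_quorum(available=1, total=3))  # False -> object inaccessible

# With a stripe width of 2 the object has more data components, and VSAN may
# add witnesses so that a single failure can never leave exactly 50% behind.
```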
You can actually see the configuration changes taking place in the vSphere UI during this process, going from this:
To this (now with added stripe width requirement):
For more information on VSAN objects and components, please refer to this post here. All VSAN blog posts can be found here.