Why would we want, or even need, something like FCDs for per-VMDK granular operations? Well, there are some concrete examples.

One example is App Volumes. App Volumes has the ability to separate the operating system from the application and user personas. These exist as stand-alone storage objects, and have facilitated the creation of what can be called Just-In-Time desktops for products like Horizon, Citrix XenApp and XenDesktop, and RDSH virtual environments. One of the drawbacks has been how to back up these App Volumes. To address this gap, a fling was created which creates a “backup VM” to which the AppStacks and Writable Volumes can be attached. This is hardly ideal.

Similarly, in vSphere Integrated OpenStack (VIO), to enable Cinder (OpenStack Block Storage) to consume VMDKs, a shadow VM had to be inserted into the environment to allow the VMDKs to be instantiated and then consumed by Cinder. For each Cinder volume, a shadow VM is created with a VMDK attached, so you end up with a lot of powered-off VMs whose only purpose is to provide Cinder volumes. I have personally had customers tell me how painful this approach is. Again, hardly ideal.

Lastly, another use case we see is creating persistent volumes for cloud-native or next-gen applications. Project Hatchway is an umbrella project which created both a Docker plugin and a Kubernetes driver (called the vSphere Cloud Provider) to allow us to create persistent volumes for containers on vSphere. The persistent volume created for the application is a VMDK that is attached either to the Docker VM or to the K8s node/worker VM. Now if we want to do operations such as snapshots and backups for that container, we again do not want to be working at VM granularity, as those VMs may host many other applications and many other persistent volumes. We need to be able to work at VMDK granularity. Thus we also need FCDs here.
The story so far
FCD information is held in a catalog in the vCenter database, which stores the UUID and friendly name of each FCD. The UUID is globally unique and is the primary identifier for the FCD. It remains valid even if the FCD is relocated or snapshotted.
APIs for FCDs first appeared in vSphere 6.5. My good pal William Lam did a good write-up of the available APIs here. However, in that release there was a limitation around backup of First Class Disks (FCDs): the Virtual Disk Development Kit (VDDK) did not support backing up a detached FCD. The workaround was to attach the FCD to a dummy VM (which doesn’t even need a Guest OS installed). More detail can be found here.
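To give a feel for the shape of those vSphere 6.5 APIs, here is a minimal sketch. With pyVmomi you would populate a vim.vslm.CreateSpec and call CreateDisk_Task on serviceContent.vStorageObjectManager; to keep the example self-contained (no live vCenter needed), the helper below models the CreateSpec fields as a plain dict. The field names mirror the pyVmomi ones, but the helper function itself is hypothetical, purely for illustration.

```python
# Sketch of the parameters behind an FCD create call in the vSphere 6.5+
# VStorageObjectManager API. With pyVmomi you would fill in a
# vim.vslm.CreateSpec and invoke CreateDisk_Task; here the spec is modelled
# as a plain dict so the example runs stand-alone. The helper name is
# illustrative, not part of any SDK.

def make_fcd_create_spec(name, capacity_gb, datastore_name, thin=True):
    """Build the fields a vim.vslm.CreateSpec would carry."""
    if capacity_gb <= 0:
        raise ValueError("FCD capacity must be positive")
    return {
        "name": name,                        # friendly name kept in the vCenter catalog
        "capacityInMB": capacity_gb * 1024,  # the API takes MB, not GB
        "backingSpec": {
            "datastore": datastore_name,     # datastore backing the VMDK object
            "provisioningType": "thin" if thin else "eagerZeroedThick",
        },
    }

spec = make_fcd_create_spec("demo-fcd", 10, "vsanDatastore")
# Against a live vCenter, this would be roughly:
#   task = serviceContent.vStorageObjectManager.CreateDisk_Task(spec)
# and the returned VStorageObject carries the UUID used in all later calls.
print(spec["capacityInMB"])  # 10240
```

The key point the sketch illustrates is that the create call works purely with a name, a capacity, and a datastore backing; no VM is involved at any point, which is exactly what makes the disk "first class".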
In vSphere 6.7, this limitation was relaxed somewhat, in that backups of detached first-class disks (FCDs) became supported. This still had some limitations, however. One requirement was that when restoring a backup of an FCD, it had to be restored with the same UUID and onto the same datastore. Of course, this has its own challenges for any restore other than an in-place restore. Another limitation was that there was no API to return the blocks that have changed since the last backup, which prevents incremental backups. Further detail on backing up FCDs can be found here.
In vSphere 6.7U1, we introduced some limited support for FCDs on vSAN. Support is limited in the sense that there are still interoperability issues with the health service and capacity monitoring.
As you can imagine, what I have outlined here means that there are still some challenges around FCDs, especially for third-party backup vendors. There is still some work to be done. We continue to work with backup vendors to address the outstanding issues that prevent them from backing up FCDs. At the time of writing, there are still no backup vendors that support detached FCD backups.
Hopefully this has given you a good feel for where we are going with FCDs/IVDs. You will start to see more and more of them going forward as we build out the use cases mentioned at the beginning of this post, and address the limitations mentioned above.
[Update] Just in case you weren’t confused enough by the interchangeable terms FCD and IVD, it would appear that in the Configuration Maximums Guide, we refer to these as Managed Virtual Disks. I’m trying to see if we can agree on one term to make this easier to figure out going forward.