vSphere 5.5 Storage Enhancements Part 1: 62TB VMDK

Regular readers will know that I’ve spent a lot of time recently posting about VSAN. But VSAN wasn’t the only announcement at VMworld 2013. We also announced the next release of vSphere – version 5.5. I now want to share a number of new storage enhancements that we have made in this latest release of vSphere. To begin with, we will look at a long-awaited feature: the ability to have virtual machine disk files larger than 2TB, the traditional maximum size of a VMDK.

Although we have introduced the new VMDK size, for the most part virtual machine behavior is unchanged. You can still do snapshots, suspends, migrations, etc., with this new size, but there are some considerations to take into account. I’ve listed them here:

What is supported?

  • NFS & VMFS-5 datastores both support the new 62TB size
  • If a VM is powered off, its VMDK can be extended to the new 62TB size
  • vMotion is fully supported, but you will not be allowed to migrate a VM with a 2TB+ VMDK to a non-vSphere 5.5 host. VMs with VMDKs < 2TB can migrate to vSphere hosts running earlier versions of ESXi (assuming that the HW version is supported)
  • Storage vMotion is fully supported, but be aware that Storage vMotion of these larger VMDKs will naturally take longer to complete
  • VM Snapshots are fully supported, but note that the larger VMDKs will now leverage the SE Sparse disk format introduced in vSphere 5.1 rather than the old redo log (vmfsSparse) format. While the creation of snapshots is very quick, consolidation of these larger snapshots can still take some time
  • Cloning works as expected, but if you have Eager Zeroed Thick (EZT) disks, cloning can again take some time on the very large VMDKs as it writes zeroes to every block. VAAI (and some array-based technologies) can alleviate some of this overhead
  • DRS is fully supported
  • vSphere HA is fully supported
  • Site Recovery Manager/vSphere Replication both support the larger size VMDK
  • vSphere Flash Read Cache (vFRC) supports the larger VMDK size

What is not supported?

  • Management via the C# client. The larger VMDKs can only be managed via the vSphere web client
  • VMFS-3 does not support the larger VMDKs. It is still limited to 2TB (minus 512 bytes) VMDKs
  • A VMDK cannot be hot-extended (grown while the VM is powered on) to a size greater than 2TB. This is due to concerns around a Guest OS doing ‘live’ GPT partition header updates
  • Similarly, Guest OSes which use the MBR partition format cannot support the larger VMDK size
  • Virtual Machines which use the BusLogic virtual SCSI adapter do not support the larger VMDKs
  • vSphere Fault Tolerance (FT) does not support the larger VMDKs
  • Virtual SAN (VSAN) does not support the larger VMDKs
  • Redo Log format (vmfsSparse) snapshots cannot grow larger than 2TB
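The size rules in the two lists above can be condensed into a few constants. Here is a minimal Python sketch, purely illustrative – none of these names exist in any VMware SDK or API – that encodes the VMDK size limits and the hot-extend restriction as described in this post:

```python
# Illustrative sketch only (hypothetical names, not a VMware API):
# the vSphere 5.5 VMDK size rules described above, as a small checker.

TB = 1024 ** 4

VMFS3_MAX_VMDK = 2 * TB - 512   # VMFS-3 limit: 2TB minus 512 bytes
VSPHERE55_MAX_VMDK = 62 * TB    # new maximum VMDK size in vSphere 5.5
HOT_EXTEND_LIMIT = 2 * TB       # cannot hot-extend beyond 2TB

def extend_allowed(new_size, powered_on, filesystem="VMFS-5"):
    """Return (ok, reason) for a requested VMDK extend.

    Conservative reading of the hot-extend rule: any extend that would
    leave the disk larger than 2TB is refused while the VM is powered on.
    """
    if filesystem == "VMFS-3":
        if new_size > VMFS3_MAX_VMDK:
            return False, "VMFS-3 is limited to 2TB minus 512 bytes"
    elif new_size > VSPHERE55_MAX_VMDK:
        return False, "62TB is the maximum VMDK size in vSphere 5.5"
    if powered_on and new_size > HOT_EXTEND_LIMIT:
        return False, "hot-extend beyond 2TB is not supported"
    return True, "ok"
```

For example, `extend_allowed(3 * TB, powered_on=False)` succeeds on VMFS-5, while the same request with the VM powered on, or against a VMFS-3 datastore, is refused.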

Web Client Support Only

For some time now, VMware has made it clear that all new features and enhancements will only be supported via the vSphere web client. This is also true for the new larger 62TB VMDKs. If you create a large VMDK via the vSphere web client and then try to examine its properties via the C# client, you will see the following ‘out of range’ error:

Note that you will not be able to create the larger VMDKs with the C# client – you will have to use the web client. If you log onto an ESXi 5.5 host using the C# client, a warning is now displayed about 5.5 features only being available via the vSphere web client.

While we readily admit that a number of products, such as VUM (vSphere Update Manager) and SRM (Site Recovery Manager), still need integration into the web client, this work is underway and customers should familiarize themselves as much as possible with the web client.

What about non-passthru RDMs?

We have supported 64TB pass-thru RDMs (physical compatibility mode) since vSphere 5.0, which is also when we introduced support for a 64TB VMFS volume on a single LUN. With the release of vSphere 5.5, we also support very large non-pass-thru RDMs (virtual compatibility mode). These vRDMs in vSphere 5.5 can now be the same size as a VMDK (~62TB).
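To recap the various maxima mentioned in this post, here is a small illustrative summary in Python – the labels are my own shorthand, and you should always confirm sizes against the official vSphere 5.5 configuration maximums:

```python
# Hypothetical summary table of the maximum sizes discussed in this post.
# Values in TB; illustrative only -- verify against VMware's official
# configuration maximums for vSphere 5.5.

MAX_SIZE_TB = {
    "VMFS-5 volume (single LUN)":  64,  # since vSphere 5.0
    "pass-thru RDM (physical)":    64,  # since vSphere 5.0
    "non-pass-thru RDM (virtual)": 62,  # new in vSphere 5.5
    "VMDK":                        62,  # new in vSphere 5.5
}

# A virtual-mode RDM can now be the same size as a VMDK:
assert MAX_SIZE_TB["non-pass-thru RDM (virtual)"] == MAX_SIZE_TB["VMDK"]
```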

15 Replies to “vSphere 5.5 Storage Enhancements Part 1: 62TB VMDK”

  1. A VMDK cannot be hot extended (grow with VM powered on) to a larger VMDK size > 2TB. This is due to concerns around a Guest OS doing ‘live’ GPT partition header updates. >> Once a VM has been updated to be larger than 2 TB during a powered off config change, can the disk then be expanded hot? (ie Disk is originally 1 TB – then it is expanded to 3 TB during power off – can you then expand to 6 TB while powered on and running?)
    Thanks.

  2. Hi Cormac, one quick question – how can wipe and/or shrink/truncate operations for the SE Sparse Disks be triggered? I haven’t found any information on that so far (besides using Linked Clones and Horizon View 5.2). Many thanks!

  3. I can successfully create a 64TB datastore using an 84TB LUN. But the maximum VMDK size I can create is 16TB with the VM powered off, instead of 62TB, using VM version 8, ESXi 5.5 GA, and VC 5.5. What am I missing here?

    1. Are you using the C# client, Gary? If so, you should be using the web client since all new features (including larger VMDKs) are only manageable via the web client.

  4. Hi Cormac, you state “Cloning works as expected, but if you have Lazy Zeroed Thick disks (LZT), cloning can again take some time on the very large VMDKs as it writes zeroes to every block.”

    Do you mean Eager Zeroed disks? I had thought that lazy zeroed disks only wrote the zeros when needed.

    Thanks for the clarification,
    Joe

Comments are closed.