Heads Up! New Patches for VMFS heap

Many of you in the storage field will be aware of a limitation on the maximum amount of open files on a VMFS volume. It has been discussed extensively, with blog articles on the vSphere blog by myself, but also articles by such luminaries as Jason Boche and Michael Webster.

In a nutshell, ESXi has a limited amount of VMFS heap space by default. While you can increase it from the default to the maximum, there are still some gaps. When you create a great many VMDKs on a very large VMFS volume, the double indirect pointer mechanism used to address blocks far out in the address space consumes heap. The result is that although we supported very large VMFS volumes (up to 64TB), the reality up to now is that a single host (since heap is defined on a per-host basis) could only address in the region of 30TB of open files. This isn’t always an issue, since VMFS is typically a clustered file system shared by many hosts, so the open VMDKs are usually spread across the hosts in a cluster. However, it is an issue for stand-alone hosts running lots of virtual machines with lots of VMDKs, and also for hosts which need a single virtual machine with a lot of VMDKs attached, for the purposes of a file share for example.
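For those who want to check or change the heap allocation on a host, the setting in question is the /VMFS3/MaxHeapSizeMB advanced option. As a rough sketch (the exact range you can set depends on the build you are running), you can query and raise it from the ESXi shell with esxcli, and the change requires a host reboot to take effect:

# Show the current, default and maximum values for the VMFS heap setting
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB

# Raise the heap to the maximum allowed by the build (640MB once the patch is applied)
esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 640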

Anyway, to cut to the chase, a recent patch release for ESXi 5.0 increases the default heap size to 256MB and the maximum heap size to 640MB per ESXi host. This should allow a single ESXi host to address in the region of 60TB of open VMDKs, which is pretty much the maximum size of a VMFS volume anyway. Previously the default was 80MB and the maximum was 256MB, so this is a significant increase. The patch is ESXi500-201303401-BG.
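If you are applying the patch manually from an offline bundle, something along these lines should do it from the ESXi shell; note that the datastore path and bundle filename below are just placeholders for wherever you have downloaded the patch bundle, and the host should be in maintenance mode and rebooted afterwards:

# Install the patch from an offline bundle copied to a datastore (path and filename are examples)
esxcli software vib update -d /vmfs/volumes/datastore1/<patch-bundle>.zip

# After the reboot, verify the new default and maximum heap values
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB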

Although the patch for ESXi 5.1 is not yet out, it should be available very shortly, and will have a similar fix.

For those of you using very large VMFS volumes with lots of virtual machine disk files, consider scheduling a maintenance slot very soon to apply these patches. This is not an issue for NFS, by the way.
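Incidentally, if you are not sure whether a given host is close to the limit, one quick way to see the capacity of the VMFS volumes it has mounted is from the ESXi shell:

# List the filesystems visible to this host, including VMFS volume capacity
esxcli storage filesystem list

Any stand-alone host with tens of TB of open VMDKs on VMFS is a good candidate to patch sooner rather than later.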