vSphere 5.5 Storage Enhancements Part 6 – Rename Files using SvMotion

This is something which comes up a lot. In the past, many people used a by-product of the Storage vMotion operation to rename all of the files associated with a virtual machine. In this vSphere 5.1U1 post, I mentioned that we brought back this functionality, but you had to set an advanced parameter to make it work. Well, in vSphere 5.5, it works without the advanced option. The following blog post shows how to use Storage vMotion in vSphere 5.5 to rename all of the files associated with a virtual machine.
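If you prefer to script the operation rather than drive it from the vSphere Web Client, the sketch below shows one way to do it with pyvmomi: rename the VM in the inventory, then relocate it to a different datastore so that Storage vMotion renames the underlying files to match. This is a minimal illustration, not the procedure from the post; the vCenter address, credentials, VM name and datastore name are all placeholder assumptions.

```python
# Minimal pyvmomi sketch: rename a VM, then Storage vMotion it to another
# datastore so that vSphere 5.5 renames the files to match the new VM name.
# The vCenter host, credentials, VM name and datastore name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

vm = find_by_name(content, vim.VirtualMachine, "old-vm-name")
target_ds = find_by_name(content, vim.Datastore, "destination-datastore")

# Step 1: rename the VM in the vCenter inventory.
WaitForTask(vm.Rename_Task("new-vm-name"))

# Step 2: Storage vMotion the VM to a different datastore; the files on disk
# (VMDKs, vmx, logs, etc.) pick up the new name as part of the migration.
WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_ds)))

Disconnect(si)
```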

vSphere 5.5 Storage Enhancements Part 5: VR/SvMotion/SDRS Interop

I wrote about this issue on the vSphere blog some time back. Essentially, the issue described in that post was that if a VM being replicated via vSphere Replication was migrated to another datastore, it triggered a full resync because the persistent state files (psf), which track the changes, were deleted. All of the disks' contents are then reread and checksummed on each side. This can have a significant impact on vSphere Replication’s RPO (Recovery Point Objective).

vSphere 5.5 Storage Enhancements Part 4: UNMAP

Continuing the series of vSphere 5.5 Storage Enhancements, we now come to a feature that is close to many people’s hearts. The vSphere Storage APIs for Array Integration (VAAI) UNMAP primitive reclaims dead or stranded space on a thinly provisioned VMFS volume, something that we could not do before this primitive came into existence. However, it has a long and somewhat checkered history. Let me share the timeline with you before I get into what improvements we made in vSphere 5.5.
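By way of illustration, dead space reclamation on VMFS is driven from the ESXi command line, and the small Python sketch below simply runs that esxcli command against a volume over SSH. It is a hedged example rather than anything from the original post: the host name, credentials, volume label and reclaim unit are assumptions, and paramiko is just one convenient way to reach the host.

```python
# Hedged sketch: trigger VMFS dead space reclamation (UNMAP) on an ESXi 5.5 host.
# Host, credentials, datastore label and reclaim unit are assumed placeholders.
import paramiko

def run_unmap(host, user, password, volume_label, reclaim_unit=200):
    """SSH to an ESXi host and run the esxcli UNMAP command against a VMFS volume."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        # Reclaims dead/stranded space in chunks of 'reclaim_unit' VMFS blocks.
        cmd = ("esxcli storage vmfs unmap "
               "--volume-label={} --reclaim-unit={}".format(volume_label, reclaim_unit))
        stdin, stdout, stderr = client.exec_command(cmd)
        print(stdout.read().decode())
        print(stderr.read().decode())
    finally:
        client.close()

if __name__ == "__main__":
    run_unmap("esxi01.example.com", "root", "password", "datastore1")
```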

vSphere 5.5 Storage Enhancements Part 2: VMFS Heap

There have been some notable discussions about VMFS heap size and heap consumption over the past year or so. An issue with previous versions of the VMFS heap meant that there were concerns when addressing more than 30TB of open files from a single ESXi host. VMware released a number of patches to temporarily work around the issue; ESXi 5.0p5 & 5.1U1 introduced a larger heap size to deal with it. However, I’m glad to say that a permanent solution has been included in vSphere 5.5 in the form of a dedicated slab for VMFS pointers and a new eviction process. I will…

Hot-Extending Large VMDKs in vSphere 5.5

In my recent post about the new large 62TB VMDKs available in vSphere 5.5, I mentioned that one could not hot-extend a VMDK (i.e. grow the VMDK while the VM is powered on) to the new larger size, because some Guest OS partition formats are not able to handle this change on-the-fly. The question was whether hot-extend was possible if the VMDK was already 2TB or more in size. I didn’t know the answer, so I decided to try a few tests in my environment.
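For anyone who wants to run a similar experiment, here is a minimal pyvmomi sketch of hot-extending a virtual disk on a powered-on VM. It is my own illustrative example rather than the exact steps from the post; the vCenter details, VM name and 3TB target capacity are assumptions, and whether the grow succeeds still depends on the disk's current size and format.

```python
# Hedged pyvmomi sketch: hot-extend the first virtual disk of a powered-on VM.
# vCenter details, VM name and the 3TB target capacity are placeholder assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the VM by name with a simple container-view lookup.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "big-vm")
view.DestroyView()

# Find the first virtual disk attached to the VM.
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

# Grow the disk to 3TB; capacityInKB is expressed in kilobytes.
disk.capacityInKB = 3 * 1024 * 1024 * 1024

disk_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
    device=disk)
WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec])))

Disconnect(si)
```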

vSphere 5.5 Storage Enhancements Part 1: 62TB VMDK

Regular readers will know that I’ve spent a lot of time recently posting about VSAN. But VSAN wasn’t the only announcement at VMworld 2013. We also announced the next release of vSphere – version 5.5. I now want to share with you a number of new storage enhancements that we have made in this latest release of vSphere. To begin with, we will look at a long-awaited feature, namely the ability to have virtual machine disk files that are larger than 2TB, the traditional maximum size of a VMDK.