vSphere 5.5 Storage Enhancements Part 2: VMFS Heap

There have been some notable discussions about VMFS heap size and heap consumption over the past year or so. An issue with previous versions of the VMFS heap meant that there were concerns when a single ESXi host addressed more than 30TB of open files. VMware released a number of patches to temporarily work around the issue; ESXi 5.0p5 & 5.1U1 introduced a larger heap size to deal with it. However, I’m glad to say that a permanent solution has been included in vSphere 5.5 in the form of a dedicated slab for VMFS pointers and a new eviction process. I will…
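Conceptually, a dedicated slab with an eviction process behaves like a bounded cache: it has a fixed capacity, and when full it evicts old entries rather than exhausting the heap. The sketch below is purely illustrative — the real VMFS slab and its eviction policy are internal to ESXi, and the class, names, and LRU choice here are my own assumptions, not the actual implementation.

```python
from collections import OrderedDict

class PointerSlab:
    """Toy model of a bounded pointer cache with LRU eviction.

    Illustrative only: the real VMFS pointer slab and its eviction
    policy are internal to ESXi; all names here are hypothetical.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()

    def lookup(self, block_addr):
        # A hit refreshes the entry's recency; a miss returns None.
        if block_addr in self._entries:
            self._entries.move_to_end(block_addr)
            return self._entries[block_addr]
        return None

    def insert(self, block_addr, pointer):
        # Evict the least recently used entry when full, so the
        # slab never grows beyond its fixed capacity.
        if block_addr not in self._entries and len(self._entries) >= self.capacity:
            self._entries.popitem(last=False)
        self._entries[block_addr] = pointer
        self._entries.move_to_end(block_addr)
```

The key property is that memory use is bounded by the slab size rather than by the total amount of open file capacity, which is why the fix removes the old ~30TB concern.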

Hot-Extending Large VMDKs in vSphere 5.5

In my recent post about the new large 62TB VMDKs available in vSphere 5.5, I mentioned that one could not hot-extend a VMDK (i.e. grow the VMDK while the VM is powered on) to the new larger size, because some Guest OS partition formats cannot handle this change on the fly. The question was whether hot-extend was possible if the VMDK was already 2TB or more in size. I didn’t know the answer, so I decided to run a few tests in my environment.

vSphere 5.5 Storage Enhancements Part 1: 62TB VMDK

Regular readers will know that I’ve spent a lot of time recently posting about VSAN. But VSAN wasn’t the only announcement at VMworld 2013. We also announced the next release of vSphere – version 5.5. I now want to share with you a number of new storage enhancements which we have made in this latest release of vSphere. To begin with, we will look at a long-awaited feature, namely the ability to have virtual machine disk files that are larger than 2TB, the traditional maximum size of VMDKs.

VSAN Part 10 – Changing VM Storage Policy on-the-fly

This is quite a unique aspect of VSAN, and it plays directly into what I believe to be one of the crucial factors of software-defined storage. It’s probably easiest to use an example to explain why being able to change storage policies on the fly is such a cool feature. Let’s take a scenario where an administrator has deployed a VM with the default VM storage policy: the VM’s storage objects should have no disk striping and should tolerate one failure. The layout of the VM could look something like this: The admin then notices…
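To make the default policy described above concrete, here is a small sketch of the two capabilities it sets and what they imply for object layout. The class and field names are hypothetical illustrations, not the real SPBM API; the n + 1 copies rule for tolerating n failures is the one VSAN uses.

```python
from dataclasses import dataclass

@dataclass
class VmStoragePolicy:
    """Hypothetical sketch of a VM storage policy; field names are
    illustrative, not the real SPBM API."""
    failures_to_tolerate: int = 1  # default: tolerate one failure
    stripe_width: int = 1          # default: no disk striping

def replicas_required(policy: VmStoragePolicy) -> int:
    # To tolerate n failures, VSAN keeps n + 1 copies of the data.
    return policy.failures_to_tolerate + 1

def stripes_per_replica(policy: VmStoragePolicy) -> int:
    # Each copy is striped across this many disks.
    return policy.stripe_width
```

Changing the policy on the fly simply means assigning a new `VmStoragePolicy` to a running VM and letting the system reconfigure the object layout to match, with no downtime.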

VSAN Part 9 – Host Failure Scenarios & vSphere HA Interop

In this next post, I will examine some failure scenarios. I will concentrate on ESXi host failures, but suffice it to say that a disk or network failure can also have consequences for virtual machines running on VSAN. There are two host failure scenarios highlighted below which can impact a virtual machine running on VSAN:

- An ESXi host on which the VM is not running, but which holds some of its storage objects, suffers a failure
- An ESXi host on which the VM is running suffers a failure

Let’s look at these failures in more detail.

VSAN Part 8 – The role of the SSD

I will start with a caveat. The plan is to support both Solid State Disks and PCIe flash devices on VSAN. However, for the purposes of this post, I will refer to this flash resource as an SSD for simplicity. SSDs serve two purposes in VSAN: they act as both a read cache and a write buffer. This dramatically improves the performance of virtual machines running on the vsanDatastore. In some respects VSAN can be compared to a number of ‘hybrid’ storage solutions on the market, which also use a combination of SSD & HDD to boost the performance of…
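The dual role of the flash tier can be sketched as follows. This is a deliberately simplified model under my own assumptions: writes are acknowledged from the write buffer and destaged to magnetic disk later, while reads are served from flash when possible and warm the read cache on a miss. The real VSAN destaging and caching logic is far more sophisticated than this toy.

```python
class FlashTier:
    """Toy model of an SSD acting as both read cache and write
    buffer in front of magnetic disk. Illustrative only."""

    def __init__(self):
        self.read_cache = {}
        self.write_buffer = {}
        self.hdd = {}

    def write(self, block, data):
        # Writes land in (and are acknowledged from) the SSD
        # write buffer, hiding HDD latency from the VM.
        self.write_buffer[block] = data

    def destage(self):
        # Later, buffered writes are destaged to HDD in bulk.
        self.hdd.update(self.write_buffer)
        self.write_buffer.clear()

    def read(self, block):
        # Reads are served from flash when possible; a miss goes
        # to HDD and warms the read cache for next time.
        if block in self.write_buffer:
            return self.write_buffer[block]
        if block in self.read_cache:
            return self.read_cache[block]
        data = self.hdd.get(block)
        if data is not None:
            self.read_cache[block] = data
        return data
```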

VSAN Part 7 – Capabilities and VM Storage Policies

In this post, the VSAN capabilities are examined in detail. These capabilities, which are surfaced by the VASA storage provider once the cluster is configured successfully, are used to set availability, capacity & performance policies on a per-VM basis when that VM is deployed on the vsanDatastore. There are five capabilities in the initial release of VSAN, as shown below. I will also try to highlight where you should use a non-default value for these capabilities.
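One way to think about non-default values is as the delta between a VM's policy and the out-of-the-box defaults. The sketch below assumes illustrative capability names and default values of my own choosing — they are not the authoritative VASA capability identifiers — purely to show the "what did I change from the defaults?" check.

```python
# Hypothetical capability names and defaults, for illustration only.
DEFAULT_POLICY = {
    "numberOfFailuresToTolerate": 1,
    "stripeWidth": 1,
    "flashReadCacheReservation": 0,  # percent of object size
    "objectSpaceReservation": 0,     # percent of object size
    "forceProvisioning": False,
}

def non_default(policy):
    """Return only the capabilities that deviate from the defaults,
    i.e. the settings worth scrutinizing in a VM's policy."""
    return {k: v for k, v in policy.items() if v != DEFAULT_POLICY[k]}
```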