During VMworld 2012 in San Francisco, I had a chance to catch up once again with the team from Tintri. My first introduction to Tintri was at last year’s VMworld, where they were runner-up in TechTarget’s Best of VMworld 2011 awards in the ‘Hardware for Virtualization’ category. Well, this year they went one better and won the Best of VMworld 2012 Gold award for Hardware for Virtualization. And for good reason. Let’s see what enhancements the last 12 months have brought to the Tintri platform.
Pure Storage are an all-flash enterprise storage company. I first met these guys at VMworld 2011 and was quite impressed by their product. Like many flash array vendors at the time, they didn’t have a great number of vSphere integration features. However, with this latest release of Purity, v2.5, Pure Storage are addressing this and more. I had a chance to meet and discuss these new features with Matt Kixmoeller & Ravi Venkat of Pure Storage recently. Not only are they now VMware Ready certified, but they’ve also got a whole bunch of integration features. Let’s have a look at the features that…
This is not really a storage feature per se, but I am including it in this series of vSphere 5.1 storage enhancements simply because most of the work to support this 5-node Microsoft cluster framework was done in the storage layer.
This post will look at Storage DRS enhancements in vSphere 5.1.
In this post, I want to call out two important matters related to the vSphere 5.1 release & EMC storage. The first relates to Round Robin Path Policy changes, and the second to a VMFS5 volume creation issue.
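For readers who want to check where a device currently stands, here is a minimal pyVmomi sketch of how one might inspect a LUN’s path selection policy and switch it to Round Robin. The host name, credentials and the naa prefix are placeholder assumptions for illustration; the actual EMC-specific recommendations are what the post itself covers.

```python
# Minimal sketch: list a host's LUNs and set a matching device to the
# Round Robin path selection policy (VMW_PSP_RR) via pyVmomi.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()   # lab only; validate certs in production
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password", sslContext=ctx)   # placeholder credentials

# First datacenter, first compute resource, first host -- illustration only
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
storage = host.configManager.storageSystem

# Map ScsiLun keys to their canonical names (e.g. naa.xxxx identifiers)
names = {l.key: l.canonicalName for l in storage.storageDeviceInfo.scsiLun}

for lun in storage.storageDeviceInfo.multipathInfo.lun:
    name = names.get(lun.lun, "")
    if name.startswith("naa.600601"):     # hypothetical EMC device prefix
        print(name, "current policy:", lun.policy.policy)
        rr = vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR")
        storage.SetMultipathLunPolicy(lunId=lun.id, policy=rr)

Disconnect(si)
```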
Storage I/O Control (SIOC) was initially introduced in vSphere 4.1 to provide I/O prioritization of virtual machines running on a cluster of ESXi hosts that had access to shared storage. It extended the familiar constructs of shares and limits, which existed for CPU and memory, to address storage utilization through a dynamic allocation of I/O queue slots across a cluster of ESXi servers. The purpose of SIOC is to address the ‘noisy neighbour’ problem, i.e. a low priority virtual machine impacting other higher priority virtual machines due to the nature of the application and its I/O running in that low…
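Since SIOC builds on those same shares and limits constructs, a short sketch may help make them concrete. This is a minimal pyVmomi example, assuming a vCenter connection and a VM name of my own invention; it assigns custom disk shares and an IOPS limit to a VM’s first virtual disk, which are the per-disk values SIOC takes into account when it distributes I/O queue slots under contention.

```python
# Minimal sketch: set custom disk shares and an IOPS limit on one VMDK.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()   # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)   # placeholder credentials

def find_vm(folder, name):
    """Naive recursive inventory walk to find a VM by name (illustration only)."""
    for child in folder.childEntity:
        if isinstance(child, vim.VirtualMachine) and child.name == name:
            return child
        if hasattr(child, "childEntity"):
            found = find_vm(child, name)
            if found:
                return found
    return None

vm = find_vm(si.content.rootFolder.childEntity[0].vmFolder, "low-priority-vm")
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

# Custom share count plus a hard cap of 1000 IOPS on this disk
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
    shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom, shares=2000),
    limit=1000)

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk)])
vm.ReconfigVM_Task(spec)
Disconnect(si)
```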
VAAI NAS introduced the ability to create LazyZeroedThick & EagerZeroedThick disks on NFS datastores. Without VAAI NAS, one can only create thin VMDKs on NFS datastores. For those of you who are using VAAI NAS plugins, there is an important note in the 5.0U1 release notes that you should be aware of: “ESXi cannot distinguish between thick provision lazy zeroed and thick provision eager zeroed virtual disks on NFS datastores with Hardware Acceleration support.”
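To make the distinction concrete, here is a minimal pyVmomi sketch of the two backing flags that select between the three disk formats when adding a VMDK. The datastore path and controller key are hypothetical placeholders; the point is simply that thin, lazy zeroed thick and eager zeroed thick differ only in the thinProvisioned/eagerlyScrub combination, which is why ESXi cannot tell the two thick variants apart once created on a hardware-accelerated NFS datastore.

```python
# Minimal sketch: build a new-disk spec in each of the three formats.
from pyVmomi import vim

def disk_backing(fmt):
    """Return a flat VMDK backing for 'thin', 'lzt' or 'ezt'."""
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        fileName="[nfs-datastore] myvm/myvm_1.vmdk",   # placeholder path
        diskMode="persistent")
    backing.thinProvisioned = (fmt == "thin")   # thin: allocate on demand
    backing.eagerlyScrub = (fmt == "ezt")       # EZT: thick + zeroed up front
    return backing                              # both False = lazy zeroed thick

# A 10 GB EagerZeroedThick disk attached to an existing SCSI controller
disk = vim.vm.device.VirtualDisk(
    backing=disk_backing("ezt"),
    controllerKey=1000,                         # placeholder controller key
    unitNumber=1,
    capacityInKB=10 * 1024 * 1024)
spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)
```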