Another Flash Array vendor that I wanted to meet with at this year's VMworld in San Francisco was Violin Memory. For those of you who have been following the keynotes at VMworld 2012, one thing that will have stood out is the 1 million IOPS from a single VM. Now, 1 million IOPS isn't something new. Last year, VMware's performance team published a paper on how they achieved 1 million IOPS from a single vSphere 5.0 host running six virtual machines. But this year, we achieved 1 million IOPS from a single virtual machine. And guess…
During VMworld 2012 in San Francisco, I had a chance to catch up once again with the team from Tintri. My first introduction to Tintri was at last year's VMworld, where they were runner-up in the 'Hardware for Virtualization' category of TechTarget's Best of VMworld 2011 awards. Well, this year they went one better and won the Best of VMworld 2012 Gold award for Hardware for Virtualization. And for good reason. Let's see what enhancements the last 12 months have brought to the Tintri platform.
Pure Storage are an all-flash enterprise storage company. I first met these guys at VMworld 2011 and was quite impressed by their product. Like many Flash Array vendors at the time, they didn't offer many vSphere integration features. However, with this latest release of Purity v2.5, Pure Storage are addressing this and more. I recently had a chance to meet and discuss these new features with Matt Kixmoeller & Ravi Venkat of Pure Storage. Not only are they now VMware Ready certified, but they've also got a whole bunch of integration features. Let's have a look at the features that…
This is not really a storage feature per se, but I am including it in this series of vSphere 5.1 storage enhancements simply because most of the work to support this five-node Microsoft cluster framework was done in the storage layer.
This post will look at Storage DRS enhancements in vSphere 5.1.
In this post, I want to call out two important matters related to the vSphere 5.1 release & EMC storage. The first is related to Round Robin Path Policy changes, and the second relates to a VMFS5 volume creation issue.
Storage I/O Control (SIOC) was initially introduced in vSphere 4.1 to provide I/O prioritization of virtual machines running on a cluster of ESXi hosts with access to shared storage. It extended the familiar constructs of shares and limits, which already existed for CPU and memory, to storage utilization through a dynamic allocation of I/O queue slots across a cluster of ESXi servers. The purpose of SIOC is to address the 'noisy neighbour' problem, i.e. a low-priority virtual machine impacting other, higher-priority virtual machines due to the nature of the application and its I/O running in that low…
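To make the shares idea concrete, here is a toy sketch of proportional-share allocation of queue slots. This is an illustration only, not VMware's actual SIOC algorithm; the function name and the VM names are made up, but the share values used (Low = 500, Normal = 1000, High = 2000) are the vSphere defaults for disk shares.

```python
# Toy model: divide a host's device queue slots among VMs in
# proportion to their configured disk shares (SIOC-style idea,
# NOT VMware's real implementation).

def allocate_queue_slots(total_slots, vm_shares):
    """Return each VM's slice of the I/O queue, proportional to shares."""
    total_shares = sum(vm_shares.values())
    return {vm: total_slots * shares // total_shares
            for vm, shares in vm_shares.items()}

# Hypothetical VMs with Low/High/Normal disk shares on a 64-slot queue.
shares = {"noisy-vm": 500, "prod-db": 2000, "web": 1000}
print(allocate_queue_slots(64, shares))
```

The point of the sketch: when the datastore is congested, a VM with High shares ends up with roughly four times the queue depth of a VM with Low shares, which is how a noisy neighbour gets throttled relative to its higher-priority peers.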