I get a lot of questions about how the vSphere APIs for Array Integration (VAAI) primitives compare from a protocol perspective. For instance, a common question is to describe the differences between the primitives for NAS storage arrays (NFS protocol) and the primitives for block storage arrays (Fibre Channel, iSCSI, and Fibre Channel over Ethernet protocols). It is a valid question because, yes, there are significant differences, and the purpose of this blog post is to detail them for you.
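For the block side, it is easy to check from the vSphere API whether a given device advertises VAAI support. Here is a minimal pyVmomi sketch (the vCenter hostname and credentials are placeholders) that walks every host's SCSI LUNs and prints the vStorageSupport field; note that VAAI-NAS support is delivered by a vendor plugin rather than advertised on the device, so NFS datastores will not show up here.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab use only: skips cert checks
si = SmartConnect(host="vcenter.example.com",          # placeholder
                  user="administrator@vsphere.local",  # placeholder
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        devices = host.configManager.storageSystem.storageDeviceInfo
        for lun in devices.scsiLun:
            # 'vStorageSupported' means the block device advertises the
            # hardware acceleration (VAAI) primitives.
            print(host.name, lun.canonicalName, lun.vStorageSupport)
finally:
    Disconnect(si)
```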
There was an interesting question posted recently about how you could monitor Storage I/O Control (SIOC) activity. Basically, how would one know whether SIOC had kicked in and was actively throttling I/O queues? Well, in vSphere 5.1, there are some new performance counters that can help you with that.
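As a hedged illustration of pulling one of those counters programmatically, the pyVmomi sketch below queries a counter named datastore.siocActiveTimePercentage against a host, instanced per datastore; the counter name and where it is surfaced are assumptions to verify against perfManager.perfCounter on your own system (connection boilerplate as in the earlier sketch).

```python
from pyVmomi import vim

def sioc_active_time(content, host, samples=15):
    """Print recent samples of the (assumed) SIOC activity counter,
    datastore.siocActiveTimePercentage, for one ESXi host."""
    perf = content.perfManager
    counter_id = next(c.key for c in perf.perfCounter
                      if c.groupInfo.key == "datastore"
                      and c.nameInfo.key == "siocActiveTimePercentage")
    spec = vim.PerformanceManager.QuerySpec(
        entity=host,
        metricId=[vim.PerformanceManager.MetricId(counterId=counter_id,
                                                  instance="*")],
        intervalId=20,       # 20-second 'real-time' samples
        maxSample=samples)
    for result in perf.QueryPerf(querySpec=[spec]):
        for series in result.value:
            # Sustained non-zero values suggest SIOC is actively
            # throttling that datastore's device queues.
            print(series.id.instance, series.value)
```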
vSphere 5.1 introduced a number of vCloud Director (vCD) interoperability features from a storage perspective, namely the ability to take VM snapshots from within the vCD UI, interoperability with Storage Profiles, and interoperability with Storage DRS. Admittedly, it’s been a while since I played with vCD and I am a little rusty, but I wanted to see how well these storage features worked with vCD 5.1. I’ll follow up with some future posts on how this all integrates, but this first post is just to highlight an issue I ran into in my haste to get the environment up and running.
This post will look at Storage DRS enhancements in vSphere 5.1.
Storage I/O Control (SIOC) was initially introduced in vSphere 4.1 to provide I/O prioritization of virtual machines running on a cluster of ESXi hosts that had access to shared storage. It extended the familiar constructs of shares and limits, which existed for CPU and memory, to address storage utilization through a dynamic allocation of I/O queue slots across a cluster of ESXi hosts. The purpose of SIOC is to address the ‘noisy neighbour’ problem, i.e. a low-priority virtual machine impacting other, higher-priority virtual machines due to the nature of the application and its I/O running in that low…
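Those shares and limits are set per virtual disk. As a minimal sketch of what that looks like through pyVmomi (assuming a connected `vm` object, obtained as in the first sketch), the function below gives a VM’s first virtual disk custom shares and an IOPS limit; it is these share values that SIOC uses when it divides up the device queue slots under contention.

```python
from pyVmomi import vim

def set_disk_io_allocation(vm, custom_shares=2000, iops_limit=-1):
    """Assign custom I/O shares and an IOPS limit (-1 = unlimited)
    to the VM's first virtual disk."""
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))
    disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
        shares=vim.SharesInfo(level=vim.SharesLevel.custom,
                              shares=custom_shares),
        limit=iops_limit)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```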
Let’s begin this post with a recap of the Storage vMotion enhancements made in vSphere 5.0. Storage vMotion in vSphere 5.0 enabled the migration of virtual machines with snapshots and also the migration of linked clones. It also introduced a new mirroring architecture, which mirrors changed disk blocks after they have been copied to the destination, i.e. we fork each write to both the source and the destination using mirror mode. This means migrations can be done in a single copy operation. Mirroring I/O between the source and the destination disks has significant gains when compared to the iterative disk pre-copy…
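To make the mirror-mode idea concrete, here is a deliberately tiny, in-memory toy (my own illustration, not VMware’s implementation): a single bulk pass copies the source, and any write that lands behind the copy cursor is forked to both disks, which is why no region ever needs to be re-copied.

```python
class MirrorModeCopy:
    def __init__(self, source: bytearray):
        self.source = source
        self.dest = bytearray(len(source))
        self.cursor = 0  # everything below this offset has been copied

    def copy_step(self, chunk: int = 4) -> bool:
        """One step of the single-pass bulk copy; False when complete."""
        end = min(self.cursor + chunk, len(self.source))
        self.dest[self.cursor:end] = self.source[self.cursor:end]
        self.cursor = end
        return self.cursor < len(self.source)

    def guest_write(self, offset: int, data: bytes) -> None:
        """A write behind the cursor is mirrored to both disks; a write
        ahead of it only hits the source, as the bulk pass will carry it."""
        self.source[offset:offset + len(data)] = data
        if offset < self.cursor:
            self.dest[offset:offset + len(data)] = data

# Interleave guest writes with the copy; the disks converge in one pass.
m = MirrorModeCopy(bytearray(b"abcdefghijkl"))
m.copy_step()                # copies "abcd"
m.guest_write(1, b"XX")      # behind the cursor: forked to both disks
m.guest_write(9, b"YY")      # ahead of the cursor: source only
while m.copy_step():
    pass
assert m.dest == m.source    # single pass, no iterative re-copy needed
```

Contrast this with iterative pre-copy, where any region dirtied during one pass has to be copied again on the next.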
Building on the 5.0 enhancements that made the life of a vSphere administrator easier from a storage perspective, vSphere 5.1 includes additional commands for diagnosing various storage protocol issues from the ESXi host. This new functionality is called I/O Device Management (IODM).
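IODM surfaces as a new esxcli namespace. As a rough sketch, the loop below (runnable from the ESXi shell, which ships a Python interpreter, or anywhere esxcli is on the PATH) sweeps the adapter listings; the `esxcli storage san` namespace and its fc/fcoe/iscsi/sas sub-namespaces are my assumption of the 5.1 layout, so run `esxcli storage san` on its own to see what your build actually exposes.

```python
import subprocess

for transport in ("fc", "fcoe", "iscsi", "sas"):
    cmd = ["esxcli", "storage", "san", transport, "list"]
    print("== %s ==" % " ".join(cmd))
    try:
        # check_output raises CalledProcessError on a non-zero exit, e.g.
        # when the host has no adapters of that transport type.
        print(subprocess.check_output(cmd).decode())
    except (subprocess.CalledProcessError, OSError) as err:
        print("not available: %s" % err)
```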