This post will look at Storage DRS enhancements in vSphere 5.1.
In this post, I want to call out two important matters related to the vSphere 5.1 release & EMC storage. The first is related to Round Robin Path Policy changes, and the second relates to a VMFS5 volume creation issue.
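As background for the path-policy discussion, the Round Robin path selection policy (VMW_PSP_RR) rotates I/O across the active paths to a device, moving to the next path after a configurable number of I/Os (the iops setting). The sketch below is a toy model of that rotation logic only; the class name, defaults and path names are my own illustration, not VMware code:

```python
class RoundRobinPathSelector:
    """Toy model of VMW_PSP_RR: rotate across the active paths
    after every `iops` I/Os on the current path."""

    def __init__(self, paths, iops=1000):
        self.paths = list(paths)
        self.iops = iops          # I/Os issued before switching paths
        self.current = 0          # index of the active path
        self.sent_on_current = 0  # I/Os issued on the active path so far

    def next_path(self):
        if self.sent_on_current >= self.iops:
            self.current = (self.current + 1) % len(self.paths)
            self.sent_on_current = 0
        self.sent_on_current += 1
        return self.paths[self.current]

# With iops=1, every I/O moves to the next path in turn.
sel = RoundRobinPathSelector(["vmhba2:C0:T0:L0", "vmhba3:C0:T0:L0"], iops=1)
print([sel.next_path() for _ in range(4)])
```

Lowering the iops value spreads load across paths more aggressively, which is why that setting tends to feature in array-specific path policy recommendations.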
Storage I/O Control (SIOC) was initially introduced in vSphere 4.1 to provide I/O prioritization of virtual machines running on a cluster of ESXi hosts that had access to shared storage. It extended the familiar constructs of shares and limits, which existed for CPU and memory, to address storage utilization through a dynamic allocation of I/O queue slots across a cluster of ESXi servers. The purpose of SIOC is to address the ‘noisy neighbour’ problem, i.e. a low priority virtual machine impacting other higher priority virtual machines due to the nature of the application and its I/O running in that low…
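The shares mechanism described above can be illustrated with a small calculation: each virtual machine's slice of the datastore's device queue is proportional to its share value relative to the total. This is a deliberately simplified model of the idea, not SIOC's actual algorithm:

```python
def allocate_queue_slots(queue_depth, vm_shares):
    """Divide a device queue depth among VMs in proportion to their
    share values (simplified, SIOC-style proportional allocation)."""
    total = sum(vm_shares.values())
    return {vm: round(queue_depth * s / total) for vm, s in vm_shares.items()}

# Illustrative share values; High/Normal/Low mirror the familiar
# CPU and memory share tiers.
print(allocate_queue_slots(64, {"prod-db": 2000, "web": 1000, "batch": 500}))
```

A VM with twice the shares of its noisy neighbour receives roughly twice the queue slots during contention, which is how the higher-priority workload is protected.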
VAAI NAS introduced the ability to create LazyZeroedThick & EagerZeroedThick disks on NFS datastores. Without VAAI NAS, one can only create thin VMDKs on NFS datastores. For those of you who are using VAAI NAS plugins, there is an important note in the 5.0U1 release notes that you should be aware of: ESXi cannot distinguish between thick provision lazy zeroed and thick provision eager zeroed virtual disks on NFS datastores with Hardware Acceleration support.
Let’s begin this post with a recap of the Storage vMotion enhancements made in vSphere 5.0. Storage vMotion in vSphere 5.0 enabled the migration of virtual machines with snapshots and also the migration of linked clones. It also introduced a new mirroring architecture which mirrors the changed disk blocks after they have been copied to the destination, i.e. we fork a write to both source and destination using mirror mode. This means migrations can be done in a single copy operation. Mirroring I/O between the source and the destination disks has significant gains when compared to the iterative disk pre-copy…
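The mirror-mode behaviour described above can be sketched as follows: during the single sequential copy pass, any guest write that lands on an already-copied region is forked to both the source and the destination disk, so no iterative re-copy of dirty blocks is needed. The class and the block-level granularity here are illustrative only, not the actual Storage vMotion implementation:

```python
class MirrorModeMigration:
    """Toy model of mirror-mode Storage vMotion: a single copy pass,
    with guest writes forked to both disks once the written block
    has already been copied to the destination."""

    def __init__(self, src, dst):
        self.src = src              # source disk: list of block values
        self.dst = dst              # destination disk, same size
        self.copied_up_to = 0       # blocks [0, copied_up_to) are copied

    def copy_next_block(self):
        i = self.copied_up_to
        self.dst[i] = self.src[i]
        self.copied_up_to += 1

    def guest_write(self, block, value):
        self.src[block] = value
        if block < self.copied_up_to:
            self.dst[block] = value  # already copied: mirror the write
        # not yet copied: the single copy pass will pick it up later

m = MirrorModeMigration(src=[1, 2, 3, 4], dst=[0, 0, 0, 0])
m.copy_next_block(); m.copy_next_block()  # blocks 0 and 1 copied
m.guest_write(0, 9)   # forked to both disks (mirror mode)
m.guest_write(3, 7)   # source only; the copy pass handles it
m.copy_next_block(); m.copy_next_block()
print(m.src == m.dst)  # disks converge in one pass: True
```

Contrast this with the older iterative pre-copy, which had to re-walk the disk repeatedly to catch blocks dirtied during each previous pass.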
Building on the 5.0 enhancements that made the life of a vSphere administrator easier from a storage perspective, vSphere 5.1 includes additional commands for diagnosing various storage protocol issues from the ESXi host. This new functionality is called I/O Device Management (IODM).
There are a number of storage protocol enhancements in vSphere 5.1, beginning with boot from software FCoE. vSphere 5.0 introduced a new software Fibre Channel over Ethernet (FCoE) adapter. A software FCoE adapter is software code that performs some of the FCoE processing & can be used with a number of NICs that support partial FCoE offload. The software adapter needs to be activated by the vSphere administrator before it can be used, similar to software iSCSI.