Automating the IOPS setting in the Round Robin PSP

A number of you have reached out about how to change some of the settings around path policies, in particular how to set the default number of IOPS in the Round Robin path selection policy (PSP) to 1. While many of you have written scripts to do this, when you reboot the ESXi host the defaults of the PSP are re-applied, and you have to run the scripts again to reapply the changes. Here I will show you how to modify the defaults so that when you unclaim/reclaim the devices, or indeed reboot the host, the desired settings take effect immediately.
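
As a rough sketch of the two approaches (the esxcli commands are standard, but the device identifier, vendor string and SATP name are placeholders you would substitute for your own environment):

  # One-off change on a single device; the PSP defaults will undo this
  # when the device is unclaimed/reclaimed or the host reboots:
  esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=naa.xxxxxxxx

  # Persistent approach: add an SATP claim rule so that iops=1 becomes
  # part of the defaults applied whenever matching devices are claimed:
  esxcli storage nmp satp rule add --satp=VMW_SATP_ALUA --psp=VMW_PSP_RR --psp-option="iops=1" --vendor="MyVendor" --description="Round Robin with IOPS=1"

Because the claim rule participates in device claiming, the setting is re-applied automatically after unclaim/reclaim operations and host reboots.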


Pluggable Storage Architecture (PSA) Deep-Dive – Part 4

In this post, I want to look at failover and load balancing specific to ALUA (Asymmetric Logical Unit Access) arrays. In PSA part 3, we took a look at the different Path Selection Plugins (PSPs), but for the most part these were discussed in the context of Active/Active arrays (where the LUN is available on all paths to the array) and Active/Passive arrays (where the LUN is owned by one controller on the array, and is only visible on the paths to that controller). ALUA provides a standard way of discovering and managing multiple paths to LUNs. Prior to ALUA, hosts needed to use array vendor-specific methods to inquire about target port states. ALUA provides a standard way for a device to report the state of its ports to hosts, and hosts can use those port states to prioritize paths and make failover and load balancing decisions.
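
To make those port states visible, here is a hedged example using the standard esxcli namespaces (the device identifier is a placeholder):

  # List the paths to a device; on an ALUA array the Group State field
  # distinguishes active optimized from active unoptimized port groups:
  esxcli storage nmp path list -d naa.xxxxxxxx

  # The SATP device configuration also reports the ALUA target port
  # group (TPG) ids and their states:
  esxcli storage nmp device list -d naa.xxxxxxxx

The NMP can then prefer paths in the active optimized target port group, and only fall back to unoptimized paths when the optimized ones are unavailable.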


Pluggable Storage Architecture (PSA) Deep-Dive – Part 3

So far in this series, we have looked at the Pluggable Storage Architecture (PSA) and MPPs (Multipath Plugins). We have delved into the Native Multipath Plugin (NMP), and had a look at its sub-plugins, the Storage Array Type Plugin (SATP) and Path Selection Plugin (PSP). We have seen how the PSA selects an MPP, and if that MPP is the NMP, how the NMP selects an SATP and PSP.
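
As a quick illustration of that selection chain (the device identifier is a placeholder, and output details vary by ESXi version):

  # Show each SATP along with the default PSP associated with it:
  esxcli storage nmp satp list

  # Show which SATP and PSP actually claimed a particular device:
  esxcli storage nmp device list -d naa.xxxxxxxx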

Note – if you are having trouble following all the acronyms, you are not the first. There is a glossary at the end of the first blog post. And as if we hadn’t had enough acronyms, you will more recently see the plugins referred to as MEMs (Management Extension Modules).

However, these names never really caught on, and the original names continue to be the ones which are commonly used. The next step is to examine the SATP and PSP in more detail.


Nimble Storage 2.0 Architecture Updates

Regular readers of my VMware Storage Blog will be no strangers to Nimble Storage. I’ve blogged about them on a number of occasions. I first came across them at a user group meeting in the UK, and I also wrote an article about them when they certified on VMware’s Rapid Desktop Program for VDI.

Nimble Storage have been in touch with me again to share details about their new 2.0 storage architecture. After a very interesting and informative chat with Wen Yu of Nimble, I’m delighted to be able to share these new enhancements with you, in this first post on my new blog site.

Nimble Storage’s new enhancements can be categorized into two areas: the first is a new scale-out architecture, and the second is further integration with vSphere.

Scale to Fit

Scale to Fit is how Nimble Storage describe their new elastic scaling architecture. It allows customers to scale out their storage along a particular dimension, be it capacity or performance, starting with a small footprint and growing from there. This can be done without having to migrate any data and without any virtual machine or application downtime. The great advantage, of course, is that it avoids over-provisioning storage up front, keeping initial costs down. When additional performance or capacity is needed, customers grow along that dimension only, so they don’t pay for additional performance if all they need is capacity, and vice versa.

vSphere Integration Features

There are three new vSphere integration features to call out in this release.

  1. Nimble Storage have a new Storage Replication Adapter (SRA) for integrating with VMware Site Recovery Manager (SRM). Business Continuity and Disaster Recovery are essential features for any enterprise-class storage array, and it is great to see that Nimble now offer full integration with VMware’s BC/DR flagship product.
  2. There are a number of additional VAAI offload primitives supported. The first of these is Hardware Assisted Locking (ATS), which enables ESXi hosts to offload VMFS volume locks to the Nimble storage array. The second is the UNMAP primitive, which enables VMFS volumes built on thin-provisioned disks to reclaim space after a Storage vMotion or VM deletion. If I remember correctly from previous conversations with Nimble, they already support the WRITE_SAME primitive. The sketch after this list shows one way to check primitive support from the host.
  3. This last feature is the one I am most excited about. Nimble Storage now offer their own Path Selection Plugin (PSP) which plugs into the Pluggable Storage Architecture of the VMkernel. This optimized multipathing plugin will load balance I/O and provide linear performance scalability, whether against a single Nimble storage array or multiple storage arrays in a scale-out cluster. The PSP is called Nimble_PSP_Directed.
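
As a rough sketch of how you might verify some of the above from an ESXi host (the device identifier is a placeholder, and the output will vary by array firmware and ESXi version):

  # Check which VAAI primitives the host reports for a device
  # (ATS, Clone, Zero for WRITE_SAME, Delete for UNMAP):
  esxcli storage core device vaai status get -d naa.xxxxxxxx

  # List the PSPs registered with the PSA; a third-party plugin such
  # as Nimble_PSP_Directed should appear here once it is installed:
  esxcli storage core plugin list --plugin-class=PSP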

Nimble Storage are a sponsor of VMworld 2012. You’ll find them at booth 306 at the US conference this year.

Get notified of these blog posts and more VMware Storage information by following me on Twitter: @CormacJHogan