I thought it was about time that I looked at some of the larger storage vendors closer to home. One of these is of course Bull. This company is probably more familiar to those of us based in Europe than to those of you based in the Americas or Asia Pacific. However, VMware customers in EMEA will have seen them in the Solutions Exchange at VMworld Europe, where they have a reasonably large presence. After some conversation with my good pal Didier Pironet, whom I’ve met at a couple of recent VMUGs, I was introduced to Philippe Reynier, who is a manager in the Bull StorWay Competence Center and Solution Center. Philippe provided me with a lot of good detail on Bull’s storage solutions, which I will share with you here.
There are many occasions where the vSphere client does not display all the relevant information about a particular storage device, or provide enough detail to troubleshoot problems related to that device. The purpose of this post is to explain some of the ESXCLI commands that I use most often when trying to determine storage device information, and to troubleshoot a particular device.
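To give a flavour of what is coming, these are the sorts of commands I reach for first. The naa identifier shown here is just a placeholder; substitute the identifier of your own device from the device list.

```shell
# List all storage devices known to the host, including size,
# display name and device type
esxcli storage core device list

# Show the detail for a single device only
# (replace the naa identifier with one from your own host)
esxcli storage core device list -d naa.60003ff44dc75adc99e7d533095bbf54

# List the paths to each device, including their current state
esxcli storage core path list
```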
This is a topic which has been discussed time and time again. It relates to an advanced storage parameter called Disk.SchedNumReqOutstanding, or DSNRO for short. There are already a number of postings out there on the topic, so I won’t get into the details once again. If you wish to learn more about what this parameter does for you, I recommend reading this post on DSNRO from my good pal Duncan Epping. Suffice to say that this parameter is related to virtual machine I/O fairness. In this post, I’ll talk about changes to DSNRO in vSphere 5.5.
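As a preview of the change, in vSphere 5.5 DSNRO becomes a per-device setting managed through ESXCLI, rather than a single host-wide advanced parameter. A quick sketch of how that looks (the naa identifier is a placeholder for one of your own devices):

```shell
# Display the current per-device value - look for the
# "No of outstanding IOs with competing worlds" field in the output
esxcli storage core device list -d naa.60003ff44dc75adc99e7d533095bbf54

# Set DSNRO to 64 on this one device only, leaving other devices untouched
esxcli storage core device set -d naa.60003ff44dc75adc99e7d533095bbf54 -O 64
```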
A number of you have reached out about how to change some of the settings around path policies, in particular how to set the default number of IOPS in the Round Robin path selection policy (PSP) to 1. While many of you have written scripts to do this, when you reboot the ESXi host, the defaults of the PSP are re-applied and then you have to run the scripts again to reapply the changes. Here I will show you how to modify the defaults so that when you unclaim/reclaim the devices, or indeed reboot the host, the desired settings come into effect immediately.
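The gist of the approach is to add an SATP claim rule so that matching devices pick up Round Robin with iops=1 at claim time, rather than patching each device after the fact. A sketch, where the SATP, vendor and model strings are placeholders that you must replace with your array’s actual values (taken from the device list output):

```shell
# Add a claim rule so devices matching this vendor/model default to
# Round Robin with an IOPS limit of 1 whenever they are claimed
# (VMW_SATP_DEFAULT_AA, MyVendor and MyModel are placeholders)
esxcli storage nmp satp rule add -s "VMW_SATP_DEFAULT_AA" \
    -V "MyVendor" -M "MyModel" -P "VMW_PSP_RR" -O "iops=1"

# Verify that the rule is in place
esxcli storage nmp satp rule list
```

Because the rule is evaluated at claim time, it survives both a reboot and an unclaim/reclaim of the devices.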
In this post, I want to look at some fail-over and load balancing behaviour specific to ALUA (Asymmetric Logical Unit Access) arrays. In PSA part 3, we took a look at the different Path Selection Plugins (PSP), but for the most part these were discussed in the context of Active/Active arrays (where the LUN is available on all paths to the array) and Active/Passive arrays (where the LUN is owned by one controller on the array, and is only visible on the paths to that controller). ALUA provides a standard way of discovering and managing multiple paths to LUNs. Prior to ALUA, hosts needed to use array vendor-specific methods to query target port state. ALUA provides a standard way for a device to report the state of its ports to hosts. Hosts can then use the port states to prioritize paths and make fail-over/load balancing decisions.
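You can see ALUA in action from the command line. On an ALUA-capable array, a device is typically claimed by VMW_SATP_ALUA, and its paths report states reflecting what the array advertises for each target port group (again, the naa identifier is a placeholder):

```shell
# Show which SATP has claimed the device - ALUA-capable arrays
# are typically claimed by VMW_SATP_ALUA
esxcli storage nmp device list -d naa.60003ff44dc75adc99e7d533095bbf54

# List the paths to the device; with ALUA you will see path states
# such as active and active unoptimized, matching the port states
# reported by the array
esxcli storage nmp path list -d naa.60003ff44dc75adc99e7d533095bbf54
```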
So far in this series, we have looked at the Pluggable Storage Architecture (PSA) and MPPs (Multipath Plugins). We have delved into the Native Multipath Plugin (NMP), and had a look at its sub-plugins, the Storage Array Type Plugin (SATP) and Path Selection Plugin (PSP). We have seen how the PSA selects an MPP, and if that MPP is the NMP, how the NMP selects an SATP and PSP.
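To recap with a command, you can list the Multipath Plugins currently registered with the PSA on a host. At a minimum you will see the NMP; third-party MPPs, if installed, would also appear here:

```shell
# List the multipath plugins loaded by the PSA on this host
esxcli storage core plugin list
```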
Note – if you are having trouble following all the acronyms, you are not the first. There is a glossary at the end of the first blog post. And if we haven’t had enough acronyms, you may more recently see the plugins referred to as MEMs (Management Extension Modules).
As I highlighted in the PSA part 1 post, NMP, short for Native Multipath Plugin, is the default Multipath Plugin shipped with ESXi hosts. Once the PSA has associated the NMP with particular paths, it uses a number of sub-plugins to handle load balancing and path fail-over. In this post, I will look at the NMP in more detail. I will pay specific attention to the activity of the Storage Array Type Plugin (SATP) which is responsible for handling path fail-over for a given storage array and also the Path Selection Plugin (PSP), which determines which physical path is used to issue an I/O request (load balancing) to a storage device.
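Before digging into the detail, it is worth knowing that you can list the SATPs and PSPs available on a host directly from the command line:

```shell
# List the Storage Array Type Plugins on the host,
# along with the default PSP associated with each
esxcli storage nmp satp list

# List the available Path Selection Plugins
# (Fixed, Most Recently Used and Round Robin)
esxcli storage nmp psp list
```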