Pluggable Storage Architecture (PSA) Deep-Dive – Part 1

In this next series of blog articles, I am going to take a look at VMware’s Pluggable Storage Architecture, more commonly referred to as the PSA. The PSA was first introduced with ESX 4.0 and can be thought of as a set of APIs that allows third-party code to be inserted directly into the storage I/O path.

Why would VMware want to allow this? The reason is straightforward. It allows 3rd party software developers (typically storage hardware vendors) to design their own load balancing techniques and fail-over mechanisms for their own storage arrays. It also means that 3rd party vendors can add support for new arrays to ESXi without having to provide internal information or intellectual property about the array to VMware.

There was another driving factor too. In the past, if one of our storage array partners wished to change the way their particular array did fail-over or load balancing, it could trigger a re-certification of all arrays, since they all shared the same SCSI mid-layer code in the VMkernel. That was a major undertaking, as I am sure you can appreciate. The PSA allows a vendor to make appropriate changes to its load balancing and fail-over mechanisms without impacting any other vendor.

Before we start, I need to give you a warning. I don’t believe there is any other component within VMware that has as many acronyms as the PSA. I’ll include a short list at the end of the post for reference. Hopefully the number of acronyms won’t put you off the post too much. The following diagram shows the relationship between the PSA, the NMP and some of the NMP’s sub-plugins, which we will discuss in a future post.

[Diagram: the PSA, the NMP and the NMP sub-plugins (SATP & PSP)]

Role of the PSA

The PSA is responsible for a number of tasks within the VMkernel. Primarily, it is responsible for loading and unloading the multipath plugins, which include the Native Multipath Plugin (NMP) from VMware and any third-party Multipath Plugins (MPPs) that are installed. In addition, the PSA:

  • handles physical path discovery and removal (via scanning);
  • determines, as paths are discovered (and based on a set of predefined claim rules which I will show you shortly), which MPP should be given ownership of each path;
  • routes I/O requests for a specific logical device to the appropriate MPP;
  • handles I/O queueing to the physical storage adapters (HBAs) and to the logical devices;
  • implements logical device bandwidth sharing between Virtual Machines;
  • provides logical device and physical path I/O statistics, which can be viewed in esxtop and in the performance views of the vSphere client UI.
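If you want to see which plugins the PSA has loaded on a host, and which physical paths it has discovered and handed out, the esxcli storage core namespace exposes both. A quick sketch (the output obviously varies from host to host):

~ # esxcli storage core plugin list    # plugins currently loaded by the PSA, including the multipath plugins
~ # esxcli storage core path list      # every discovered path, including the plugin that owns it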

Role of the MPP

Once an MPP is given ownership of a path by the PSA, the MPP is responsible for associating a set of physical paths with a logical storage device, or LUN. Its tasks include managing physical path claiming and un-claiming, as well as creating, registering, and de-registering logical devices on the host. It also processes I/O requests to logical devices by selecting an optimal physical path for each request (load balancing) and by performing the actions necessary to handle failures and request retries (fail-over). The MPP is also involved in management tasks such as aborting or resetting logical devices.

Each ESXi host ships with a default Multipath Plugin (MPP), called the Native Multipath Plugin (NMP). However, as mentioned, third parties can plug in their own software rather than use the default NMP. The most common third-party MPPs are EMC PowerPath/VE and Symantec/Veritas DMP.
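To check which MPP has actually claimed the paths to a given device, you can filter the path listing by device. A rough example, using a placeholder device identifier (substitute one of your own naa IDs):

~ # esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx    # paths to this device; each entry reports the owning plugin (e.g. NMP or PowerPath)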

NMP Detail

The Native Multipath Plugin (NMP) supports all storage arrays listed on the VMware storage Hardware Compatibility List (HCL). The NMP manages sub-plugins which handle path fail-over and load balancing. The specific details of handling path fail-over for a given storage array are delegated to an NMP sub-plugin called the Storage Array Type Plugin (SATP); SATPs are associated with paths. The specific details of determining which physical path is used to issue an I/O request (load balancing) to a storage device are handled by another NMP sub-plugin, the Path Selection Plugin (PSP); PSPs are associated with logical devices. The SATP & PSP will be discussed in more detail in a future post.
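To see which SATPs and PSPs are available on an ESXi 5.0 host, and which PSP each SATP uses by default, the nmp namespace in esxcli can be queried:

~ # esxcli storage nmp satp list    # available Storage Array Type Plugins and their default PSPs
~ # esxcli storage nmp psp list     # available Path Selection Plugins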

Claimrules

Here is the default set of claim-rules from an ESXi 5.0 host:

~ # esxcli storage core claimrule list
Rule Class   Rule  Class    Type       Plugin     Matches
----------  -----  -------  ---------  ---------  ---------------------------------
MP              0  runtime  transport  NMP        transport=usb
MP              1  runtime  transport  NMP        transport=sata
MP              2  runtime  transport  NMP        transport=ide
MP              3  runtime  transport  NMP        transport=block
MP              4  runtime  transport  NMP        transport=unknown
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          65535  runtime  vendor     NMP        vendor=* model=*

The rules are evaluated from the top of the list down, lowest rule number first, so basically if a device is discovered to be usb, sata, ide, block, or in fact has an unknown transport type, the PSA will assign the NMP ownership of the path. The MASK_PATH rule is there to hide the controller/gateway pseudo-devices (Universal Xport) presented by certain DELL storage arrays.

The “runtime” rules are the ones currently in use by the VMkernel. The “file” claim-rules are the ones defined in the /etc/vmware/esx.conf file. Claim-rules must be loaded from the /etc/vmware/esx.conf file into the kernel before they take effect, so the list of rules in each place may not always be the same.
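As an aside, adding a rule, loading it, and asking the PSA to re-run the claiming process can all be done from the command line. The sketch below adds a MASK_PATH rule to hide a particular LUN; the rule number, adapter, and LUN values are made up for illustration, so treat it as an outline rather than a procedure to copy blindly:

~ # esxcli storage core claimrule add -r 120 -t location -A vmhba2 -C 0 -T 0 -L 20 -P MASK_PATH    # example rule 120: mask LUN 20 behind vmhba2
~ # esxcli storage core claimrule load                                                             # push the "file" rules from esx.conf into the VMkernel
~ # esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx                                   # re-run the claim rules against the (placeholder) device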

The last rule assigns any other storage it discovers, from any vendor and any model, to the NMP. This is a catch-all rule. Therefore, by default, every storage device is managed by the NMP, and it is then up to the NMP to see what it can do with it.
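Once the NMP owns a device, you can see what it decided to do with it, i.e. which SATP and which PSP were associated with the device. Again, a placeholder device identifier is used here:

~ # esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx    # reports the Storage Array Type (SATP) and Path Selection Policy (PSP) for this device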

Let’s now look at the same set of rules from an ESXi host which has EMC PowerPath installed (I’ve truncated some of the output, fyi).

~ # esxcli storage core claimrule list
Rule Class   Rule  Class    Type       Plugin     Matches
----------  -----  -------  ---------  ---------  ---------------------------------
MP              0  runtime  transport  NMP        transport=usb
MP              1  runtime  transport  NMP        transport=sata
MP              2  runtime  transport  NMP        transport=ide
MP              3  runtime  transport  NMP        transport=block
MP              4  runtime  transport  NMP        transport=unknown
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            250  runtime  vendor     PowerPath  vendor=DGC model=*
MP            250  file     vendor     PowerPath  vendor=DGC model=*
MP            260  runtime  vendor     PowerPath  vendor=EMC model=SYMMETRIX
MP            260  file     vendor     PowerPath  vendor=EMC model=SYMMETRIX
MP            270  runtime  vendor     PowerPath  vendor=EMC model=Invista
MP            270  file     vendor     PowerPath  vendor=EMC model=Invista
…<snip>…
MP          65535  runtime  vendor     NMP        vendor=* model=*

As before, if a device is discovered to be usb, sata, ide, block, or in fact has an unknown transport type, the PSA will once again assign the NMP ownership of the path. However, if the vendor turns out to be DGC (short for Data General, which EMC purchased many moons ago), or indeed EMC, the PSA will assign PowerPath as the owner. There are in fact other array vendors listed too, since PowerPath will also work with some non-EMC arrays. And at the end of the list, if there has been no other match before it, we have our catch-all rule, which assigns the NMP to any device which hasn’t matched an earlier rule.

In a future post, I’ll look at the NMP, SATP & PSP in more detail.

Acronyms

  • PSA – Pluggable Storage Architecture
  • MPP – Multipath Plugin
  • NMP – Native Multipath Plugin
  • SATP – Storage Array Type Plugin
  • PSP – Path Selection Plugin
  • MEM – Multipath Extension Module

Get notifications of these blog posts and more VMware Storage information by following me on Twitter: @CormacJHogan

13 Replies to “Pluggable Storage Architecture (PSA) Deep-Dive – Part 1”

  1. Hi Cormac

    Great post as always.

    I am wondering, can you edit the claim rules? I.e. if you had PP/VE or a similar plugin installed, could you force the NMP to take over, perhaps for troubleshooting?

    1. Hi Damian,

      Yes, you can add, remove and change the claim rules. However I would only do this under the guidance of VMware GSS folks.
      As you can imagine, if you start doing things with the claim rules and make a mistake, the situation can become a lot worse.
      I believe one of the troubleshooting steps when dealing with 3rd party plugins is to remove the plugin completely and see if the issue persists with VMware’s default plugins.
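      For instance, removing a third-party multipathing module means removing its VIB and rebooting the host. A rough sketch (the VIB name below is just a placeholder; check what ‘esxcli software vib list’ actually reports on your host):

      ~ # esxcli software vib list                                 # identify the third-party multipathing VIB installed on the host
      ~ # esxcli software vib remove -n <third-party-vib-name>     # remove it (placeholder name), then reboot the host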

  2. I don’t think so; you cannot edit a rule which has already been added, but you can move an added claim rule to a higher number (to lower its priority), and ensure that this claim rule number sits next to the rule of the plugin you are interested in debugging.

    NMP has the lowest priority; we cannot add any rule after it.

    1. Thanks for clarifying Dev. When I said you could change a rule, I was referring to the fact that you can use the ‘esxcli storage core claimrule move’ command to change the rule id.
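      For example, to move a (hypothetical) rule 250 up to rule id 111 and then reload the rules, I believe the syntax looks something like this:

      ~ # esxcli storage core claimrule move -r 250 -n 111   # -r is the existing rule id, -n the new rule id (example numbers only)
      ~ # esxcli storage core claimrule load                 # load the updated claim rules into the VMkernel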
