vSOM and VDPA Releases

In case you missed it, VMware released two interesting additions to the product portfolio last week. The first is vSOM – vSphere with Operations Manager. It became apparent that vCenter’s UI, while adequate for many activities such as VM provisioning, does not provide the level of detail or forecasting required to manage large and complex infrastructures on a day-to-day basis. This is why VMware has introduced a new SKU which bundles the vCenter Operations product with vSphere. vCenter Operations Manager (vCOps) is the most comprehensive product for monitoring the health and performance of your virtual infrastructure. It also monitors…

Software iSCSI and IPv6 Support

A little while ago, I researched a support statement regarding Software iSCSI & IPsec. After digging around a bit, I found that the answer was no, it is not supported, since we have not yet done a complete set of tests on this combination of products/features. However, in the course of my research, I came across some conflicting support statements about Software iSCSI & IPv6: KB article 1010812 states that IPv6 storage (Software iSCSI and NFS) is experimental in ESX 4.0, while KB article 1021769 states that VMware vSphere ESX/ESXi 4.1 supports IPv6 for use with the Service Console and VMkernel management…
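Incidentally, if you want to check which address families your Software iSCSI configuration is actually using, the host CLI can show you. A minimal sketch, assuming an ESXi 5.x host where the Software iSCSI adapter happens to be vmhba33 (the adapter name varies from host to host):

    # List the VMkernel interfaces and their addresses; any IPv6
    # addresses that are configured show up alongside the IPv4 ones
    esxcfg-vmknic -l

    # Show the network portals (VMkernel NICs) bound to the Software
    # iSCSI adapter; vmhba33 is just an example adapter name
    esxcli iscsi networkportal list --adapter=vmhba33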

Pluggable Storage Architecture (PSA) Deep-Dive – Part 4

In this post, I want to look at some fail-over and load-balancing behavior specific to ALUA (Asymmetric Logical Unit Access) arrays. In PSA part 3, we took a look at the different Path Selection Plugins (PSP), but for the most part these were discussed in the context of Active/Active arrays (where the LUN is available on all paths to the array) and Active/Passive arrays (where the LUN is owned by one controller on the array, and is only visible on the paths to that controller). ALUA provides a standard way of discovering and managing multiple paths to LUNs. Prior to…
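You can see ALUA at work from the host by examining a device claimed by the ALUA SATP. A minimal sketch, assuming an ESXi 5.x host and a placeholder naa device identifier (substitute one from your own environment):

    # Confirm the device was claimed by VMW_SATP_ALUA and see its PSP
    esxcli storage nmp device list -d naa.60060160abcdef0123456789

    # List its paths; on ALUA arrays, paths to the owning controller
    # report "active" while the others report "active unoptimized"
    esxcli storage nmp path list -d naa.60060160abcdef0123456789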

Heads Up! NetApp NFS Disconnects

I just received notification of KB article 2016122, which VMware has published. It deals with a topic that I’ve seen discussed recently on the community forums. The symptom is that during periods of high I/O, NFS datastores from NetApp arrays become unavailable for a short period of time before becoming available once again. This seems to be primarily observed when the NFS datastores are presented to ESXi 5.x hosts. The KB article describes a work-around for the issue, which is to tune the queue depth on the ESXi hosts and thereby reduce I/O congestion to the datastore. By…
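For reference, this sort of tuning is done through an advanced host setting. A minimal sketch, assuming the work-around uses the NFS.MaxQueueDepth setting and that 64 is merely an illustrative value (check the KB article for the values it actually recommends):

    # Check the current NFS maximum queue depth on the host
    esxcli system settings advanced list -o /NFS/MaxQueueDepth

    # Lower it to throttle outstanding I/O to NFS datastores;
    # 64 is an illustrative value, not a recommendation
    esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64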

Pluggable Storage Architecture (PSA) Deep-Dive – Part 3

So far in this series, we have looked at the Pluggable Storage Architecture (PSA) and Multipath Plugins (MPPs). We have delved into the Native Multipath Plugin (NMP) and taken a look at its sub-plugins, the Storage Array Type Plugin (SATP) and the Path Selection Plugin (PSP). We have seen how the PSA selects an MPP and, if that MPP is the NMP, how the NMP selects an SATP and a PSP. Note – if you are having trouble following all the acronyms, you are not the first; there is a glossary at the end of the first blog post. And if we…
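The SATP-to-PSP pairing mentioned above is easy to inspect on a host, since each SATP carries a default PSP. A minimal sketch, assuming the ESXi 5.x esxcli namespaces:

    # Show every SATP on the host along with its default PSP
    esxcli storage nmp satp list

    # Show the Path Selection Plugins available on the host
    esxcli storage nmp psp list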

Pluggable Storage Architecture (PSA) Deep-Dive – Part 2

As I highlighted in the PSA part 1 post, the NMP, short for Native Multipath Plugin, is the default Multipath Plugin shipped with ESXi hosts. Once the PSA has associated the NMP with particular paths, the NMP uses a number of sub-plugins to handle load balancing and path fail-over. In this post, I will look at the NMP in more detail, paying specific attention to the activity of the Storage Array Type Plugin (SATP), which is responsible for handling path fail-over for a given storage array, and the Path Selection Plugin (PSP), which determines which physical path is used…
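To make this concrete, you can ask the host which SATP and PSP the NMP has picked for each device, and override the PSP on a per-device basis. A minimal sketch, assuming an ESXi 5.x host and a placeholder naa identifier (use one from your own host):

    # Show the SATP and PSP the NMP selected for each device
    esxcli storage nmp device list

    # Override the PSP for a single device, e.g. switch it to
    # Round Robin; the naa identifier here is a placeholder
    esxcli storage nmp device set -d naa.60060160abcdef0123456789 -P VMW_PSP_RR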

Pluggable Storage Architecture (PSA) Deep-Dive – Part 1

In this next series of blog articles, I am going to take a look at VMware’s Pluggable Storage Architecture, more commonly referred to as the PSA. The PSA was first introduced with ESX 4.0 and can be thought of as a set of APIs that allows third-party code to be inserted directly into the storage I/O path. Why would VMware want to allow this? The reason is straightforward: it allows third-party software developers (typically storage hardware vendors) to design their own load-balancing techniques and fail-over mechanisms for their own storage arrays. It also means that third-party…
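The plugins the PSA has loaded, and the claim rules that decide which MPP owns which paths, can both be listed from the command line. A minimal sketch, assuming the ESXi 5.x esxcli namespaces:

    # List the multipath plugins registered with the PSA
    # (the NMP at a minimum; third-party MPPs appear here too)
    esxcli storage core plugin list --plugin-class=MP

    # List the claim rules that determine which plugin claims which paths
    esxcli storage core claimrule list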