Microsoft Clustering on vSphere – Incompatible Device Errors

When setting up a Microsoft Cluster with nodes running in vSphere Virtual Machines across ESXi hosts, I have come across folks who have experienced “Incompatible device backing specified for device ‘0’” errors. These are typically a result of the RDM (Raw Device Mapping) setup not being quite right. There can be a couple of reasons for this, as highlighted here. Different SCSI Controller: on one occasion, the RDM was mapped to the same SCSI controller as the Guest OS boot disk. Once the RDM was moved to its own unique SCSI controller, the issue was resolved. Basically, if the OS disk…
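As a quick way to check for this condition, here is a minimal pyVmomi sketch of my own (not from the original post; the vCenter address, credentials and VM name are placeholders) that lists each virtual disk on a cluster node VM, whether it is an RDM, and which SCSI controller it sits on, making it easy to spot an RDM sharing a controller with the Guest OS boot disk.

```python
# Minimal pyVmomi sketch - the vCenter address, credentials and VM name below
# are placeholders, not details from the original post.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "mscs-node1")  # hypothetical VM name
    # Map controller device keys to controller objects so each disk can be
    # matched back to the SCSI controller it is attached to.
    controllers = {d.key: d for d in vm.config.hardware.device
                   if isinstance(d, vim.vm.device.VirtualSCSIController)}
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            is_rdm = isinstance(
                dev.backing,
                vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo)
            ctrl = controllers[dev.controllerKey]
            print(f"{dev.deviceInfo.label}: RDM={is_rdm}, "
                  f"SCSI({ctrl.busNumber}:{dev.unitNumber})")
finally:
    Disconnect(si)
```

For a Microsoft Cluster, the shared RDMs would normally be expected on their own SCSI controller (for example SCSI1), separate from the boot disk on SCSI0.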

Atlantis ILIO 4.0 – Persistent Desktops in Memory

At the start of this month, Atlantis Computing gave me a preview of their new ILIO Persistent VDI 4.0. As the title of this post suggests, Atlantis have a very nice new feature in this release. Last year, I blogged about their ILIO Diskless VDI for non-persistent desktops, which ran purely in memory. That was quite a novel concept; it struck a chord with a lot of customers (and won a number of awards too). However, many of their customers asked them to provide an in-memory solution for persistent desktops as well as non-persistent ones. With this release, Atlantis have responded…

Pluggable Storage Architecture (PSA) Deep-Dive – Part 4

In this post, I want to look at some fail-over and load-balancing behaviour specific to ALUA (Asymmetric Logical Unit Access) arrays. In PSA part 3, we took a look at the different Path Selection Plugins (PSPs), but for the most part these were discussed in the context of Active/Active arrays (where the LUN is available on all paths to the array) and Active/Passive arrays (where the LUN is owned by one controller on the array, and is only visible on the paths to that controller). ALUA provides a standard way of discovering and managing multiple paths to LUNs. Prior to…
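By way of illustration, here is a rough helper of my own (not from the series; the device ID is a placeholder) that can be run from an ESXi shell to summarise the ALUA path group state for each path to a device, distinguishing the Active Optimized paths from the Active Non-optimized ones reported by the array.

```python
# Rough sketch for an ESXi shell - the NAA identifier below is a placeholder.
import subprocess

DEVICE = "naa.60060160a0b1250000000000000000a1"  # placeholder device ID

out = subprocess.run(
    ["esxcli", "storage", "core", "path", "list", "--device", DEVICE],
    capture_output=True, text=True, check=True).stdout

runtime_name = None
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Runtime Name:"):
        runtime_name = line.split(":", 1)[1].strip()
    elif line.startswith("Group State:"):
        # On an ALUA array this reads "active" for the optimized path group
        # and "active unoptimized" for the non-optimized group.
        group_state = line.split(":", 1)[1].strip()
        print(f"{runtime_name}: {group_state}")
```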

Pluggable Storage Architecture (PSA) Deep-Dive – Part 3

So far in this series, we have looked at the Pluggable Storage Architecture (PSA) and MPPs (Multipath Plugins). We have delved into the Native Multipath Plugin (NMP), and had a look at its sub-plugins, the Storage Array Type Plugin (SATP) and Path Selection Plugin (PSP). We have seen how the PSA selects an MPP, and if that MPP is the NMP, how the NMP selects an SATP and PSP. Note – if you are having trouble following all the acronyms, you are not the first. There is a glossary at the end of the first blog post. And if we…
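As a companion to the PSP discussion, here is a hedged sketch of my own (not from the post; the device ID is a placeholder) showing how the PSP on a single device could be checked and switched to Round Robin from an ESXi shell with esxcli.

```python
# Hedged sketch for an ESXi shell - the NAA identifier is a placeholder, and
# you should confirm VMW_PSP_RR is supported by your array before applying it.
import subprocess

DEVICE = "naa.600601601234567890abcdef12345678"  # placeholder device ID

def esxcli(*args):
    """Run an esxcli command and return its stdout."""
    return subprocess.run(["esxcli", *args], capture_output=True,
                          text=True, check=True).stdout

# Show the current SATP/PSP claim for the device.
print(esxcli("storage", "nmp", "device", "list", "--device", DEVICE))

# Switch the Path Selection Policy for this device to Round Robin.
esxcli("storage", "nmp", "device", "set", "--device", DEVICE,
       "--psp", "VMW_PSP_RR")

# Verify the change took effect.
print(esxcli("storage", "nmp", "device", "list", "--device", DEVICE))
```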

Pluggable Storage Architecture (PSA) Deep-Dive – Part 2

As I highlighted in the PSA part 1 post, NMP, short for Native Multipath Plugin, is the default Multipath Plugin shipped with ESXi hosts. Once the PSA has associated the NMP with particular paths, it uses a number of sub-plugins to handle load balancing and path fail-over. In this post, I will look at the NMP in more detail. I will pay specific attention to the activity of the Storage Array Type Plugin (SATP) which is responsible for handling path fail-over for a given storage array and also the Path Selection Plugin (PSP), which determines which physical path is used…
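To make the SATP/PSP pairing easier to see on a live host, here is a small sketch of my own (not part of the original post) that lists the SATPs an ESXi host knows about together with the default PSP each one carries, and then shows the per-device view of which SATP and PSP actually claimed each LUN.

```python
# Small sketch for an ESXi shell - simply wraps the relevant esxcli commands.
import subprocess

def esxcli(*args):
    return subprocess.run(["esxcli", *args], capture_output=True,
                          text=True, check=True).stdout

# Each SATP is listed alongside its default PSP and a short description.
print(esxcli("storage", "nmp", "satp", "list"))

# The per-device view shows which SATP and PSP are in use for each LUN.
print(esxcli("storage", "nmp", "device", "list"))
```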

Pluggable Storage Architecture (PSA) Deep-Dive – Part 1

In this next series of blog articles, I am going to take a look at VMware’s Pluggable Storage Architecture, more commonly referred to as the PSA. The PSA was first introduced with ESX 4.0 and can be thought of as a set of APIs that allows third-party code to be inserted directly into the storage I/O path. Why would VMware want to allow this? The reason is straightforward. This allows third-party software developers (typically storage hardware vendors) to design their own load-balancing techniques and fail-over mechanisms for their own storage arrays. It also means that third party…
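As a quick way to see which Multipath Plugins are registered with the PSA on a host, here is a minimal sketch of my own (assuming an ESXi shell; not from the post). On a host running only the default stack you should see the NMP along with the MASK_PATH plugin; a third-party MPP from a storage vendor would show up here as well.

```python
# Minimal sketch for an ESXi shell - lists plugins of the Multipath (MP) class.
import subprocess

out = subprocess.run(
    ["esxcli", "storage", "core", "plugin", "list", "--plugin-class=MP"],
    capture_output=True, text=True, check=True).stdout
print(out)
```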