I was having a conversation with one of our tech support guys (Greg Williams) recently about the relaxed requirement that now allows Raw Device Mappings (RDMs) to be presented to different hosts using different SCSI identifiers while still supporting vMotion operations in vSphere 5.5. You can read that post here, where I described how the restriction has been relaxed. Greg mentioned that he was handling a case where customers wished to share a physical mode/passthru RDM between VMs on different ESXi hosts with a view to running Microsoft Clustering Services (MSCS) on top. We call this CAB or Clustering…
My good pal Duco Jaspars pinged me earlier this week about an issue that was getting a lot of discussion in the VMware community. Duco also pointed me to a blog post by Andreas Peetz where he described the issue in detail here. The symptom is that the ESXi hostd process becomes unresponsive when software iSCSI is enabled. There is another symptom where an ESXi boot hangs after the message “iscsi_vmk loaded successfully” or “vmkibft loaded successfully”. This has only been observed with the ESXi 5.5 U1 Driver Rollup ISO. It has not been reported by customers using the standard…
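If you want to check quickly which of your hosts actually have the software iSCSI initiator enabled before rolling an image out, here is a minimal sketch using pyVmomi; the vCenter address and credentials are placeholders and certificate verification is disabled for brevity, so treat it as lab-only.

```python
#!/usr/bin/env python
# Minimal sketch: report whether the software iSCSI initiator is enabled on each host.
# Assumes pyVmomi and a reachable vCenter; hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; validate certificates in production
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='changeme', sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # softwareInternetScsiEnabled is part of the host's storage device info
        enabled = host.config.storageDevice.softwareInternetScsiEnabled
        print('{0}: software iSCSI enabled = {1}'.format(host.name, enabled))
    view.DestroyView()
finally:
    Disconnect(si)
```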
I was going to make this part 11 of my vSphere 5.5 Storage Enhancements series, but I thought that since this is such a major enhancement to storage in vSphere 5.5, I’d put a little more focus on it. vFRC, short for vSphere Flash Read Cache, is a mechanism whereby the read operations of your virtual machine are accelerated by using an SSD or a PCIe flash device to cache the disk blocks of the application running in the Guest OS of your virtual machine. Now, rather than going to magnetic disk to read a block of data, the data…
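For those who like to script these things, here is a rough sketch of how a vFRC reservation might be attached to a virtual disk through the vSphere API with pyVmomi. The vm_obj argument, the 100MB reservation and the 8KB block size are illustrative assumptions only, not recommendations.

```python
# Rough sketch: reserve a Flash Read Cache on the first virtual disk of a VM.
# Assumes pyVmomi, vSphere 5.5 or later, and that vm_obj is a vim.VirtualMachine
# you have already looked up. Reservation/block size values are placeholders.
from pyVmomi import vim

def enable_vfrc(vm_obj, reservation_mb=100, block_size_kb=8):
    # find the first virtual disk on the VM
    disk = next(d for d in vm_obj.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))

    # attach a Flash Read Cache configuration to the disk
    disk.vFlashCacheConfigInfo = vim.vm.device.VirtualDisk.VFlashCacheConfigInfo(
        reservationInMB=reservation_mb,
        blockSizeInKB=block_size_kb)

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk)
    spec = vim.vm.ConfigSpec(deviceChange=[change])
    return vm_obj.ReconfigVM_Task(spec=spec)
```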
A short and sweet post today. In vSphere 5.0, VMware introduced support for 16Gb FC HBAs. However, these HBAs had to be throttled down to work at 8Gb. In 5.1, VMware supported these 16Gb HBAs running at 16Gb. However, an important point to note is that there was no support for full end-to-end 16Gb connectivity from host to array in vSphere 5.1. To get full bandwidth, you possibly had to configure a number of 8Gb connections from the switch to the storage array. With the release of vSphere 5.5, VMware now supports 16Gb E2E (end-to-end) Fibre Channel. Get notification of…
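If you want to see what speed each FC HBA has actually negotiated, a quick pyVmomi sketch along these lines can report it; it assumes a vim.HostSystem object retrieved as in the earlier snippet, and the speed value should be read as whatever the adapter driver reports to the API.

```python
# Sketch: list Fibre Channel HBAs and the link speed each one reports.
# Assumes pyVmomi and that 'host' is a vim.HostSystem already retrieved
# (for example via a container view as in the earlier snippet).
from pyVmomi import vim

def report_fc_hbas(host):
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.FibreChannelHba):
            # 'speed' is whatever the adapter driver reports to the API;
            # compare it against the switch and array port speeds when
            # confirming end-to-end 16Gb connectivity.
            print('{0}: {1} model={2} speed={3}'.format(
                host.name, hba.device, hba.model, hba.speed))
```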
We at VMware have been making considerable changes to the way that the All Paths Down (APD) and Permanent Device Loss (PDL) conditions are handled. In vSphere 5.1, we introduced a number of enhancements around APD, including timeouts for devices that entered into the APD state. I wrote about the vSphere 5.1 APD improvements here. In vSphere 5.5 we introduced yet another improvement to this mechanism, namely the automatic removal from the ESXi host of devices which have entered the PDL state.
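As a hedged sketch, here is one way you might report the related advanced settings (Misc.APDHandlingEnable, Misc.APDTimeout and the newer Disk.AutoremoveOnPDL) across your hosts with pyVmomi, assuming an existing ServiceInstance connection called si.

```python
# Sketch: dump the APD/PDL related advanced settings on each host.
# Assumes pyVmomi and an existing ServiceInstance 'si' (connection made
# as in the earlier snippet).
from pyVmomi import vim

APD_PDL_OPTIONS = ['Misc.APDHandlingEnable', 'Misc.APDTimeout', 'Disk.AutoremoveOnPDL']

def report_apd_pdl_settings(si):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        opt_mgr = host.configManager.advancedOption
        for name in APD_PDL_OPTIONS:
            try:
                value = opt_mgr.QueryOptions(name=name)[0].value
            except vim.fault.InvalidName:
                value = 'not present (older ESXi release?)'
            print('{0}: {1} = {2}'.format(host.name, name, value))
    view.DestroyView()
```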
This is a topic which has been discussed time and time again. It relates to an advanced storage parameter called Disk.SchedNumReqOutstanding, or DSNRO for short. There are already a number of postings out there on the topic, so I won’t get into the details once again. If you wish to learn more about what this parameter does for you, I recommend reading this post on DSNRO from my good pal Duncan Epping. Suffice to say that this parameter is related to virtual machine I/O fairness. In this post, I’ll talk about changes to DSNRO in vSphere 5.5.
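In vSphere 5.5 the value is set per device through esxcli rather than through a single global parameter. The sketch below simply wraps those esxcli calls in Python for illustration; the NAA identifier and the value of 64 are placeholders, and you can of course just run the two esxcli commands directly in the ESXi shell.

```python
# Sketch: set a per-device DSNRO value from the ESXi 5.5 shell, where the
# setting is applied per device via esxcli rather than the old global
# Disk.SchedNumReqOutstanding parameter. Device id and value are placeholders.
import subprocess

DEVICE = 'naa.60003ff44dc75adc8e55bf6cc7a1e6b7'   # placeholder NAA identifier
NEW_DSNRO = '64'                                   # placeholder value

# set the per-device "number of outstanding IOs with competing worlds" value
subprocess.check_call(['esxcli', 'storage', 'core', 'device', 'set',
                       '--device', DEVICE,
                       '--sched-num-req-outstanding', NEW_DSNRO])

# read the device details back to confirm the new setting
print(subprocess.check_output(['esxcli', 'storage', 'core', 'device',
                               'list', '--device', DEVICE]).decode())
```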
About a year ago I wrote an article stating that Raw Device Mappings (RDM) continued to rely on LUN IDs, and that if you wished to successfully vMotion a virtual machine with an RDM from one host to another host, you had to ensure that the LUN was presented in a consistent manner (including identical LUN IDs) to every host that you wished to vMotion to. I recently learnt that this restriction has been lifted in vSphere 5.5. To verify, I did a quick test, presenting the same LUN with a different LUN ID to two different hosts, using that…
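If you want to compare the LUN ID that each host sees for the same device, here is a small pyVmomi sketch; it assumes you have already built a list of vim.HostSystem objects, for example via a container view as in the earlier snippets.

```python
# Sketch: show the LUN ID each host uses for every SCSI device, so you can
# see whether the same naa.* device is presented with different LUN numbers.
# Assumes pyVmomi and that 'hosts' is a list of vim.HostSystem objects.
from pyVmomi import vim

def report_lun_ids(hosts):
    for host in hosts:
        storage = host.config.storageDevice
        # map the internal ScsiLun key to its canonical (naa.*) name
        key_to_name = {lun.key: lun.canonicalName for lun in storage.scsiLun}
        for adapter in (storage.scsiTopology.adapter or []):
            for target in (adapter.target or []):
                for lun in (target.lun or []):
                    print('{0}: {1} -> LUN {2}'.format(
                        host.name,
                        key_to_name.get(lun.scsiLun, lun.scsiLun),
                        lun.lun))
```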