Heads Up! Device Queue Depth on QLogic HBAs

Just thought I'd bring to your attention something that has been doing the rounds here at VMware recently, and which will be applicable to those of you using QLogic HBAs with ESXi 5.x. The following are the device queue depths you will find when using QLogic HBAs for SAN connectivity:

- ESXi 4.1 U2 – 32
- ESXi 5.0 GA – 64
- ESXi 5.0 U1 – 64
- ESXi 5.1 GA – 64

The higher depth of 64 has been in place since 24 Aug 2011 (the 5.0 GA release). The issue is that this has not been documented anywhere. For the majority of…
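If you want to see what your hosts are actually using, or pin the queue depth to a specific value, something along these lines can be run from the ESXi shell. This is only a rough sketch assuming the standard qla2xxx driver and its ql2xmaxqdepth module parameter; check the driver/module name on your own hosts and the KB guidance for your build before applying anything.

# Check the current module options for the QLogic driver (assuming the qla2xxx module)
esxcli system module parameters list -m qla2xxx

# Set the LUN queue depth explicitly (64 shown here purely as an example value)
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=64

# A reboot is required for the module parameter to take effect; the effective
# per-device queue depth can then be verified in esxtop (disk device view, DQLEN column).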

Heads Up! New Patches for VMFS heap

Many of you in the storage field will be aware of a limitation on the maximum number of open files on a VMFS volume. It has been discussed extensively, with blog articles on the vSphere blog by myself, but also articles by such luminaries as Jason Boche and Michael Webster. In a nutshell, ESXi has a limited amount of VMFS heap space by default. While you can increase it from the default to the maximum, there are still some gaps. When you create very many VMDKs on a very large VMFS volume, the double indirect pointer mechanism to address…
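For reference, the VMFS heap size is controlled by an advanced setting, and a minimal sketch of checking and raising it from the ESXi shell is below. The value of 256 is simply the pre-patch maximum as I recall it for 5.x, so verify the limits that apply to your particular build and patch level before changing anything.

# Check the current VMFS heap setting
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB

# Raise it towards the maximum (256 MB used here as an example value only)
esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256

# A reboot is needed for the new heap size to take effect.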

Microsoft Clustering on vSphere – Incompatible Device Errors

When setting up a Microsoft Cluster with nodes running in vSphere Virtual Machines across ESXi hosts, I have come across folks who have experienced "Incompatible device backing specified for device '0'" errors. These are typically a result of the RDM (Raw Device Mapping) setup not being quite right. There can be a couple of reasons for this, as highlighted here.

Different SCSI Controller

On one occasion, the RDM was mapped to the same SCSI controller as the Guest OS boot disk. Once the RDM was moved to its own unique SCSI controller, it resolved the issue. Basically, if the OS disk…
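By way of illustration, the shared cluster disk is normally presented as an RDM pointer and attached to its own virtual SCSI controller (for example SCSI 1:0) rather than the controller holding the boot disk. A rough sketch of creating the mapping file from the ESXi shell follows; the naa ID and datastore paths are purely placeholders.

# Create a physical-mode (pass-through) RDM mapping file for the shared LUN
# (naa ID and paths below are placeholders)
vmkfstools -z /vmfs/devices/disks/naa.60000000000000000000000000000001 \
  /vmfs/volumes/datastore1/node1/quorum-rdm.vmdk

# The RDM should then be added to the VM on its own SCSI controller (e.g. SCSI 1:0),
# with that controller's bus sharing set appropriately for the cluster configuration,
# rather than on the controller that holds the Guest OS boot disk.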

Heads Up! NetApp NFS Disconnects

I just received notification about KB article 2016122 which VMware has just published. It deals with a topic that I've seen discussed recently on the community forums. The symptom is that during periods of high I/O, NFS datastores from NetApp arrays become unavailable for a short period of time, before becoming available once again. This seems to be primarily observed when the NFS datastores are presented to ESXi 5.x hosts. The KB article describes a workaround for the issue, which is to tune the queue depth on the ESXi hosts, reducing I/O congestion to the datastore. By…
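As a sketch only, the tuning is done through an NFS advanced setting on each host; the exact parameter name, its availability on your patch level, and the value to use should all be taken from the KB article itself rather than from here.

# Lower the NFS queue depth per the KB workaround (64 shown as an example value;
# confirm the parameter and value against KB 2016122 for your ESXi build)
esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64

# Check the current value afterwards
esxcli system settings advanced list -o /NFS/MaxQueueDepth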

Heads Up! Nutanix NOS v2.6.4 now available

Nutanix have informed me that they have a new release available – Nutanix OS 2.6.4 (NOS is the new name for what was previously called the Nutanix Complete Cluster). They are looking for all their customers to proactively move to this new release. Although Nutanix also have a NOS 3.0 release on the cards, existing customers will first need to move to version 2.6.4 in order to be in a position to migrate to 3.0. If that is not reason enough, the 2.6.4 release also includes the following new features:

Heads Up! VOMA – ERROR: Trying to do IO beyond device Size

This is a short note on trying to use VOMA, the vSphere On-disk Metadata Analyzer, on a dump taken from a VMFS-5 volume which was upgraded from VMFS-3. This is not an issue if VOMA is run directly on the volume; it is only an issue if a dump is taken from the volume and then you try to run VOMA on the dump. It may error during Phase 1 – 'Checking VMFS header and resource files' – with the error 'ERROR: Trying to do IO beyond device Size'. When a VMFS-3 volume is upgraded to VMFS-5, a new system file, pb2.sf, is…
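For anyone who hasn't tried VOMA yet, running it directly against the device backing the volume (the case that works fine here) looks roughly like this. The naa ID and partition number are placeholders, and the datastore should not be in active use while the check runs.

# Run a metadata check directly against the partition backing the VMFS volume
# (device name and partition number below are placeholders)
voma -m vmfs -d /vmfs/devices/disks/naa.60000000000000000000000000000001:1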

Heads Up! Storage DRS scheduler removes rules

One of my friends over in VMware support just gave me a heads-up on this issue. It affects virtual machines with anti-affinity VMDK rules defined (for keeping virtual machine disks on different datastores in a datastore cluster) when the Storage DRS automation level is changed via scheduled tasks. The rule will be removed if you use the Storage DRS scheduler to change the Storage DRS automation level from automatic to manual and then back to automatic. The result is that a VM which had an anti-affinity rule and had its VMDKs on different datastores could end up with its VMDKs on the…