A quick note to let you know about a newly published KB article: the VSAN Observer tool, used for monitoring the performance of VSAN deployments, reports incorrect values for Outstanding IO when running vSphere 5.5U2.
KB 2091979 reports the issue as follows:
Virtual SAN (VSAN) Observer graphs in the “VSAN Client”, “VSAN Disk”, “DOM Owner” or individual VSAN object on the “VM” tab show very high Outstanding I/O (OIO) value that is inconsistent with the actual I/O load.
Here is a sample screenshot from my VSAN environment running vSphere 5.5U2. As you can see the Outstanding IO values are off the scale:
Of course, this behaviour may lead to you “chasing your tail”, so to speak, when monitoring or troubleshooting VSAN, so we are working on getting this resolved asap. Check the KB article regularly for updates regarding a fix. In the meantime, understand that a high Outstanding IO count in VSAN Observer is expected behaviour with this bug, and may not be the symptom of any underlying issue.
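While waiting for the fix, Little’s Law (outstanding IO ≈ IOPS × latency) gives a rough idea of the Outstanding IO you would expect from the observed load; if Observer’s OIO graph is orders of magnitude above that, you are most likely looking at this reporting bug rather than a real queue build-up. A minimal sketch, with illustrative numbers (not taken from the KB):

```python
def expected_oio(iops: float, latency_ms: float) -> float:
    """Little's Law: outstanding IO = arrival rate x time in system."""
    return iops * (latency_ms / 1000.0)

# Illustrative example: 2000 IOPS at 2 ms latency implies only ~4 IOs in
# flight, so an Observer OIO graph "off the scale" would point at the bug.
print(expected_oio(2000, 2.0))  # → 4.0
```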
I’m a bit late in bringing this to your attention, but there is a potential issue with VASA storage providers disconnecting from vCenter, resulting in no VSAN capabilities being visible when you try to create a VM Storage Policy. These storage providers (one on each ESXi host participating in the VSAN cluster) supply out-of-band information about the underlying storage system, in this case VSAN. If none of the providers on the ESXi hosts is communicating with the SMS (Storage Monitoring Service) on vCenter, then vCenter cannot display any of the capabilities of the VSAN datastore, which means you will be unable to build any further storage policies for virtual machine deployments (VMs already deployed with VM Storage Policies are unaffected). Even a resynchronization operation fails to reconnect the storage providers to vCenter. This seems to predominantly affect vCenter servers that were upgraded to vCenter 5.5U1, rather than newly installed vCenter servers.
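One quick check before digging into SMS itself is whether each host’s VSAN storage provider endpoint is reachable at all from the vCenter side. A minimal sketch, assuming the provider’s default self-registered URL of https://&lt;host&gt;:8080/version.xml — verify the actual URL in the Storage Providers view in vCenter, as the port and path here are assumptions, and the hostnames are hypothetical:

```python
import ssl
import urllib.request


def provider_url(host: str, port: int = 8080) -> str:
    """Build the assumed VSAN VASA provider endpoint URL for an ESXi host.

    Port 8080 and the /version.xml path are assumptions; check the
    registered provider URL in vCenter's Storage Providers view."""
    return "https://{}:{}/version.xml".format(host, port)


def probe_provider(host: str, timeout: float = 5.0) -> bool:
    """Return True if the provider endpoint answers at all."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # ESXi hosts typically use self-signed certs
    try:
        urllib.request.urlopen(provider_url(host), timeout=timeout, context=ctx)
        return True
    except Exception:
        return False


if __name__ == "__main__":
    for esx in ["esxi-01.lab.local", "esxi-02.lab.local"]:  # hypothetical hosts
        print(esx, "reachable" if probe_provider(esx) else "NOT reachable")
```

A reachable endpoint does not prove the provider is registered and online in SMS, but an unreachable one narrows the problem down to the host side straight away.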
We’ve seen a spate of incidents recently related to the HP Smart Array (hpsa) driver that ships as part of ESXi 5.x. Worst case scenario – this is leading to out of memory conditions and a PSOD (Purple Screen of Death) on the ESXi host in some cases. All Smart Array controllers that use the affected hpsa driver version are exposed to this issue. For details on the symptoms and the exact driver version, check out VMware KB article 2075978.
HP have also released a Customer Advisory c04302261 on the issue.
This was a tricky one to deal with, as one possible step might be to roll back/downgrade the driver to an earlier version. Unfortunately, not only is this unsupported (and undocumented), but you might also find that an older driver may not work with a newer storage controller. The good news is that HP now have a new version of the driver available which fixes the issue, with separate fixed hpsa versions for ESXi 5.0/5.1 and for ESXi 5.5. Details on which version to use, where to locate the driver and how to upgrade it are in their advisory. Think about doing this as soon as possible.
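To see which hpsa driver version a host is actually running, the output of `esxcli software vib list` (run from the ESXi shell or over SSH) can be parsed. A minimal sketch against an illustrative sample of that output; the VIB name `scsi-hpsa` is the usual package name for this driver, but both it and the version strings shown here are assumptions to verify against your own hosts and the HP advisory:

```python
def installed_vib_version(output: str, vib_name: str = "scsi-hpsa"):
    """Parse 'esxcli software vib list' output and return the version
    of the named VIB, or None if it is not installed."""
    for line in output.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == vib_name:
            return fields[1]
    return None


# Sample output shape (columns: Name, Version, Vendor, ...). The version
# strings are illustrative, not the affected or fixed hpsa releases.
sample = """Name            Version         Vendor  Acceptance Level  Install Date
scsi-hpsa       5.5.0-1vmw      VMW     VMwareCertified   2014-06-01
net-e1000e      1.1.2-1vmw      VMW     VMwareCertified   2014-06-01"""

print(installed_vib_version(sample))  # → 5.5.0-1vmw
```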
Very quick update …
Many readers will be aware of an ongoing issue with NFS in ESXi 5.5U1. My colleague, Duncan, wrote an article about it on his blog site recently entitled – Alert: vSphere 5.5 & NFS issue. Essentially, your NFS datastore may experience an APD (All Paths Down) condition. The issue is also described in KB article 2076392.
I’m pleased to say that VMware has now produced a patch to address this issue. The patch is 5.5EP4 (June 2014) and can be downloaded from VMware’s patch repository site; search on ESXi (Embedded and Installable), version 5.5.0. Another KB article, 2077360, has more information about the patch fix.
We just got notification about a potential issue with the VAAI UNMAP primitive when used on EMC VMAX storage systems with Enginuity version 5876.159.102 or later. It seems that during an ESXi reboot, or during a device ATTACH operation, the ESXi host may report VMFS corruption. The following is an overview of the details found in EMC KB 184320. Other symptoms include vCenter operations on virtual machines failing to complete, and the following errors might be found in the VMkernel logs:
WARNING: Res3: 6131: Invalid clusterNum: expected 2624, read 0
[type 1] Invalid totalResources 0 (cluster 0).
[type 1] Invalid nextFreeIdx 0 (cluster 0).
WARNING: Res3: 3155: Volume aaaaaaaa-bbbbbbbb-cccc-dddddddddddd ("datastore1") might be damaged on the disk. Resource cluster metadata corruption has been detected
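If you have a number of hosts to check, the VMkernel log (typically /var/log/vmkernel.log on the host) can be scanned for these messages automatically. A minimal sketch, with the match patterns lifted directly from the excerpt above:

```python
import re

# Patterns taken from the VMkernel log excerpt above.
CORRUPTION_PATTERNS = [
    re.compile(r"Invalid clusterNum"),
    re.compile(r"Invalid totalResources"),
    re.compile(r"Invalid nextFreeIdx"),
    re.compile(r"might be damaged on the disk"),
    re.compile(r"metadata corruption has been detected"),
]


def corruption_lines(log_text: str):
    """Return the log lines matching any known Res3 corruption message."""
    return [line for line in log_text.splitlines()
            if any(p.search(line) for p in CORRUPTION_PATTERNS)]


# Example against one warning from the excerpt plus a benign line:
sample = ("WARNING: Res3: 6131: Invalid clusterNum: expected 2624, read 0\n"
          "iscsi_vmk loaded successfully")
print(corruption_lines(sample))  # only the Res3 warning is flagged
```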
My good pal Duco Jaspars pinged me earlier this week about an issue that was getting a lot of discussion in the VMware community. Duco also pointed me to a blog post by Andreas Peetz, where he describes the issue in detail.
The symptom is that the ESXi hostd process becomes unresponsive when software iSCSI is enabled. There is another symptom where an ESXi boot hangs after the message “iscsi_vmk loaded successfully” or “vmkibft loaded successfully”. This has only been observed with the ESXi 5.5U1 Driver Rollup ISO; it has not been reported by customers using the standard ESXi 5.5U1 media. The VMware ESXi 5.5 Update 1 Driver Rollup provides an installable ESXi ISO image that includes drivers for various products produced by VMware partners.
Initially it was reported in the community that it appeared to be an issue with the Diablo TeraDimm driver that was shipped as part of the rollup. However, further investigation has concluded that the Emulex be2iscsi driver is at fault and is the root cause. VMware support are recommending that you use an updated be2iscsi driver, as per KB article 2075171, to address the issue.
So for those of you that plan a 5.5U1 deployment and also use software iSCSI, heads up if you plan on using the ESXi 5.5U1 Driver Rollup ISO (which, by the way, is only supported for use with new installs, not upgrades).
[Updated] A new ESXi 5.5 Update 1 Driver Rollup 2 was uploaded on April 24th. This addresses the issues reported in this post.
Hmm, it seems to be the week that’s in it for storage issues. After publishing the DELL EQL & VMFS issue earlier this week, I have now been given a heads-up on an EMC VNXe & iSCSI issue. The symptoms are ESXi hosts being unable to boot from an iSCSI LUN on the VNXe or ESXi hosts losing connectivity to iSCSI datastores.