Hmm, it seems to be the week that’s in it for storage issues. After publishing the DELL EQL & VMFS issue earlier this week, I have now been given a heads-up on an EMC VNXe & iSCSI issue. The symptoms are ESXi hosts being unable to boot from an iSCSI LUN on the VNXe or ESXi hosts losing connectivity to iSCSI datastores.
Our GSS folks have just released KB article 2049103, which details a VMFS heartbeat and lock corruption issue that manifests itself on DELL EqualLogic storage arrays running PS Series firmware v6.0.6. As per the KB:
A VMFS datastore has a region designated for heartbeat types of operations to ensure that distributed access to the volume occurs safely. When files are being updated, the heartbeat region for those files is locked by the host until the update is complete.
In this scenario, the heartbeat region has become corrupt.
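If you suspect you are hitting this, ESXi 5.1 ships with VOMA (the vSphere On-disk Metadata Analyzer), which can run a read-only check of VMFS metadata, including the heartbeat region. A minimal sketch only — the device name is a placeholder, and the KB has the authoritative diagnostic steps:

```shell
# The datastore should be quiesced (no running VMs) before a VOMA check.
# naa.xxxxxxxxxxxxxxxx is a placeholder for the device backing the affected
# datastore; ":1" is the partition on which the VMFS volume resides.
voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1
```

VOMA in 5.1 is check-only; if it reports heartbeat corruption, follow the KB for remediation rather than attempting repairs yourself.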
I just got a notification about this myself today. Apparently there are some interoperability issues between VAAI (vSphere APIs for Array Integration) and EMC RecoverPoint on EMC VNX arrays. It looks like the VNX Storage Processor (SP) may reboot when running Operating Environment Release 32 P204 in a RecoverPoint environment.
EMC has today released a technical advisory – ETA emc327099 – which describes the issue in more detail, but is basically advising customers to disable VAAI on all ESXi hosts in the RecoverPoint environment while they figure this out. Hopefully it won’t take too long to come up with a solution that allows VAAI to run in these environments once again.
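The ETA itself has the authoritative instructions, but for reference, the three VAAI primitives are toggled via standard advanced settings, which can be done from esxcli without a reboot. A sketch of the usual vSphere 5.x mechanism:

```shell
# Check the current state of the three VAAI primitives (1 = enabled, 0 = disabled)
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking

# Disable all three primitives on this host
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit
esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking
```

Remember this needs to be done on every ESXi host in the RecoverPoint environment, and reversed (value 1) once a fix is available.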
Thanks to our friends over at EMC (shout out to Itzik), we’ve recently been made aware of a limitation in our UNMAP mechanism in ESXi 5.0 & 5.1. It would appear that if you attempt to reclaim more than 2TB of dead space in a single operation, the UNMAP primitive does not handle it very well. The current thinking is that this is because we have a 2TB (minus 512 bytes) file size limit on VMFS-5. When the space to reclaim is above this size, we cannot create the very large temporary balloon file (part of the UNMAP process), and it spews the following errors:
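Until this is addressed, a simple workaround is to reclaim in smaller passes. vmkfstools -y takes a percentage of the free space on the datastore, so on very large volumes pick a percentage that keeps the temporary balloon file under 2TB. A sketch, with the datastore name as a placeholder:

```shell
# Run from within the datastore to be reclaimed (name is a placeholder)
cd /vmfs/volumes/<datastore-name>

# On a datastore with, say, 6TB free, reclaiming 30% creates a balloon file
# of roughly 1.8TB, safely under the 2TB (minus 512 bytes) VMFS-5 file limit.
vmkfstools -y 30

# Repeat with further passes if more dead space remains to be reclaimed.
```

The percentage is per-pass against current free space, so a few modest passes achieve the same end result as one large one.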
Just thought I’d bring to your attention something that has been doing the rounds here at VMware recently, and will be applicable to those of you using QLogic HBAs with ESXi 5.x. The following are the device queue depths you will find when using QLogic HBAs for SAN connectivity:
- ESXi 4.1 U2 – 32
- ESXi 5.0 GA – 64
- ESXi 5.0 U1 – 64
- ESXi 5.1 GA – 64
The higher queue depth of 64 has been in place since 24 Aug 2011 (the 5.0 GA release). The issue is that this change has not been documented anywhere. For the majority of users this is not an area of concern, and is probably a benefit, but there are some concerns.
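If the higher default is a concern in your environment, the queue depth can be tuned back down via the driver module parameter. A sketch assuming the qla2xxx module — the module name and parameter name can vary with the QLogic driver version, so confirm what is loaded on your host first:

```shell
# Confirm which QLogic module is loaded on this host
esxcli system module list | grep qla

# Set the per-device queue depth back to 32 (takes effect after a reboot)
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=32

# Verify the parameter took
esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth
```

As ever with queue depths, only change this if you have a measured reason to; for most environments the default is fine.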
Many of you in the storage field will be aware of a limitation on the maximum number of open files on a VMFS volume. It has been discussed extensively, in blog articles on the vSphere blog by myself, but also in articles by such luminaries as Jason Boche and Michael Webster.
In a nutshell, ESXi has a limited amount of VMFS heap space by default. While you can increase it from the default to the maximum, there are still some gaps. When you create a great many VMDKs on a very large VMFS volume, the double-indirect pointer mechanism used to address blocks far out in the address space consumes heap. The result is that although we support very large VMFS volumes (up to 64TB), the reality up to now is that a single host (since heap is defined on a per-host basis) could only address in the region of 30TB of open files. This isn’t always an issue, since VMFS is typically a clustered file system shared by many hosts, so the open VMDKs would normally be spread across many hosts in a cluster. However, it is an issue for stand-alone hosts with lots of virtual machines with lots of VMDKs, and also for hosts which need to attach a lot of VMDKs to a single virtual machine, for the purposes of a file share for example.
Anyway, to cut to the chase, a recent patch release for ESXi 5.0 increases the default heap size to 256MB and the maximum heap size to 640MB per ESXi host. This should allow a single ESXi host to access in the region of 60TB of open VMDKs. Previously the default was 80MB and the maximum was 256MB, so this is a significant increase, and is pretty much the maximum size of a VMFS volume anyway. The patch is ESXi500-201303401-BG.
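Note that the patch raises the ceiling; to actually use the new maximum you still need to raise the VMFS3.MaxHeapSizeMB advanced setting. A sketch of checking and setting it via esxcli (a reboot is required for the new heap size to take effect):

```shell
# Check the current, default and maximum VMFS heap size on this host
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB

# On a patched host, raise the heap to the new 640MB maximum
esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 640
```

On an unpatched host the set command will reject 640, since the previous maximum was 256MB.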
Although the patch for ESXi 5.1 is not yet out, it should be available very shortly, and will have a similar fix.
For those of you using very large VMFS volumes with lots of virtual machine disk files, consider scheduling a maintenance slot very soon to apply these patches. This is not an issue for NFS, by the way.
When setting up a Microsoft Cluster with nodes running in vSphere virtual machines across ESXi hosts, I have come across folks who have experienced ‘Incompatible device backing specified for device 0’ errors. These are typically the result of the RDM (Raw Device Mapping) setup not being quite right. There can be a couple of reasons for this, as highlighted here.
Different SCSI Controller
On one occasion, the RDM was mapped to the same SCSI controller as the Guest OS boot disk. Moving the RDM to its own unique SCSI controller resolved the issue. Basically, if the OS disk is configured to use SCSI 0:0, then you cannot put the RDM on SCSI 0:1 or SCSI 0:2; you must put the RDM on SCSI 1:x or SCSI 2:x.
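For illustration, the relevant portion of the .vmx configuration ends up looking something like the fragment below: scsi0 carries only the boot disk, while scsi1 is a separate controller dedicated to the RDM pointer file, with physical bus sharing for a cluster-across-boxes setup. The file names here are hypothetical:

```
scsi0.present = "TRUE"
scsi0:0.fileName = "bootdisk.vmdk"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "physical"
scsi1:0.fileName = "quorum_rdm.vmdk"
scsi1:0.mode = "independent-persistent"
```

In practice you would make these changes through the vSphere Client (add a new hard disk of type RDM, and pick a virtual device node on a new controller) rather than hand-editing the .vmx.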
Matching LUN ID
Another cause of the above error is the RDM being presented to different ESXi hosts with different LUN IDs. The RDM must be presented to all ESXi hosts (and thus all MSCS nodes) using the same LUN ID.
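A quick way to verify this is to compare the LUN number reported for the device’s paths on each host. A sketch, assuming you know the device’s NAA identifier (the identifier below is a placeholder):

```shell
# Run on every ESXi host in the cluster; the reported LUN number
# must be identical across all hosts presenting the RDM.
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx | grep -i "LUN:"
```

If the numbers differ between hosts, correct the presentation on the array side so all hosts see the device at the same LUN ID.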