Auto LUN Discovery on ESXi hosts

Did you know that newly presented LUNs/paths added to an already discovered target are automatically discovered by your ESXi host, without a rescan of the SAN? In this example, I currently see two iSCSI LUNs from my NetApp array. Let’s see what happens when I add new devices to my ESXi host from a new target.
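To observe this from the CLI, the current devices and iSCSI sessions can be listed as follows. This is just a sketch: the adapter name vmhba33 is an example, and a rescan is only needed for the new target, not for new LUNs behind an already discovered one.

```shell
# List the SCSI devices currently visible to the host.
esxcli storage core device list

# Show the iSCSI sessions to already-discovered targets
# (new LUNs behind these targets show up without a rescan).
esxcli iscsi session list

# A rescan is only required to pick up an entirely new target
# (vmhba33 is an example software iSCSI adapter name).
esxcli storage core adapter rescan --adapter vmhba33
```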

VOMA – Found X actively heartbeating hosts on device

One of the long-awaited features introduced with vSphere 5.1 was VOMA (vSphere On-disk Metadata Analyzer), essentially a filesystem checker for both the VMFS metadata and the LVM (Logical Volume Manager). Now, if you have an outage on either the host or the storage side, you have a mechanism to verify the integrity of your filesystems once everything comes back up, giving you peace of mind that all is well after the outage. There is, however, a requirement that the VMFS volume be quiesced when running the VOMA utility. This post will look at some possible reasons…
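As a quick sketch of the workflow (the datastore name "mydatastore" and the device partition naa.xxx:1 are placeholders), the volume is quiesced by unmounting it before VOMA is pointed at the backing partition:

```shell
# Quiesce the volume first: unmount the VMFS datastore from this host
# ("mydatastore" is a placeholder datastore label).
esxcli storage filesystem unmount -l mydatastore

# Run the VMFS metadata check against the backing device partition
# (naa.xxx:1 is a placeholder for the device and partition number).
voma -m vmfs -d /vmfs/devices/disks/naa.xxx:1
```

If other hosts still have the volume mounted and heartbeating, VOMA will refuse to proceed, which is exactly the condition the post's title refers to.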

Error while adding NFS mount: NFS connection limit reached!

The advanced setting SunRPC.MaxConnPerIP defines the maximum number of unique TCP connections that can be opened to a given IP address, which is of particular interest to NFS users. If the number of NFS mounts to an IP address exceeds SunRPC.MaxConnPerIP, new mounts from that IP address share the existing connections. Currently, VMware supports a maximum of 128 unique TCP connections per ESXi host, but also supports up to 256 mounts per host. So what options are available to configure ESXi hosts to allow the maximum number of NFS mounts?
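Assuming you want to raise both settings to the maximums mentioned above (a sketch; check your current values first, and note these are host-level advanced settings):

```shell
# Raise the per-IP TCP connection limit to the supported maximum of 128.
esxcli system settings advanced set -o /SunRPC/MaxConnPerIP -i 128

# Raise the NFS mount limit to the supported maximum of 256.
esxcli system settings advanced set -o /NFS/MaxVolumes -i 256

# Verify the values that are now in effect.
esxcli system settings advanced list -o /SunRPC/MaxConnPerIP
esxcli system settings advanced list -o /NFS/MaxVolumes
```

With 256 mounts and a 128-connection ceiling, mounts to the same array IP address necessarily share connections, which is the behavior described above.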

vCenter Server 5.1.0b Released

This is a follow-up to my previous post on 5.0 Update 2. At the same time, VMware also released vCenter 5.1.0b. This post looks at the storage items addressed in that update, although the storage fixes are relatively minor compared to the enhancements made in other areas. Note that this update is for vCenter only – there is no corresponding ESXi 5.1 update.

vCenter 5.0U2 and ESXi 5.0U2 Released

Hi all. Prior to the holidays, VMware released new versions of vCenter & ESXi on December 20th, with new releases for both vSphere 5.0 & 5.1. In this post, I want to discuss the 5.0 Update 2 release, which contained a number of notable storage-specific fixes I wanted to highlight. I will follow up with a look at the storage enhancements in the new 5.1 release in a future post.

NFS Best Practices – Part 4: Sizing Considerations

Welcome to the next installment of NFS Best Practices. In this fourth and final best practice section on NFS, I asked a number of our major NAS storage partners some sizing questions, which fell into three categories: Do you have a recommended NFS volume size or sweet spot? Do you have a recommended volume block size to configure on the array? Do you have a recommended number of VMs per NFS datastore? The responses from the vendors were all quite similar. Let’s take a look at what they had to say.

Could not detect setting of sitpua for device naa.xxx. Error Not supported.

I recently received a question about the following message appearing in the VMkernel logs of an ESXi host: 2012-12-07T12:15:58.994Z cpu17:420151)ScsiDeviceIO: 6340: Could not detect setting of sitpua for device naa.xxx. Error Not supported. So what does that mean? First, it isn’t anything to be greatly concerned about. SITPUA, short for Single Initiator Thin Provisioning Unit Attention, relates to Out Of Space (OOS) conditions on thin-provisioned LUNs. To ensure that an OOS warning is sent to just one host using the affected LUN, the SITPUA bit in the Thin Provisioning Mode Page must be set to…
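If you want to check whether your host is logging this message and whether the device in question is even recognized as thin-provisioned, a quick look from the CLI might be (naa.xxx is the placeholder device ID from the log):

```shell
# Search the current VMkernel log for the sitpua message.
grep -i sitpua /var/log/vmkernel.log

# Check the thin-provisioning status the host has detected for the device
# (naa.xxx is a placeholder for the actual device identifier).
esxcli storage core device list -d naa.xxx | grep -i "Thin Provisioning Status"
```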