QLogic Mt. Rainier Technology Preview

I was fortunate enough yesterday to get an introduction to QLogic’s new Mt. Rainier technology. Although Mt. Rainier allows for different configurations of SSD/Flash to be used, the one that caught my eye was the QLogic QLE10000 Series of SSD HBAs. These have not started to ship yet, but considering that the announcement was last September, one suspects that GA is not far off. As the name suggests, this is a PCIe Flash card, but QLogic have one added advantage – the flash is combined with the Host Bus Adapter, meaning that you get your storage connectivity and cache accelerator…

Heads Up! VOMA – ERROR: Trying to do IO beyond device Size

This is a short note on trying to use VOMA, the vSphere On-disk Metadata Analyzer, on a dump taken from a VMFS-5 volume which was upgraded from VMFS-3. This is not an issue if VOMA is run directly on the volume; it only arises when a dump is taken from the volume and VOMA is then run against the dump. It may fail during phase 1, ‘Checking VMFS header and resource files’, with the error ‘ERROR: Trying to do IO beyond device Size’. When a VMFS-3 volume is upgraded to VMFS-5, a new system file, pb2.sf, is…
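As a rough sketch of the workflow involved, the commands below show VOMA run directly against a VMFS partition and then against a dump of that partition. The device name, dump size and file path are placeholders of my own, not values from the post.

    # Run VOMA directly against the VMFS partition (no error in this case);
    # the naa. device name is a placeholder
    voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1

    # Capture a dump of the start of the volume and check that instead;
    # the 1536MB size is illustrative only
    dd if=/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1 of=/tmp/vmfs-dump.bin bs=1M count=1536
    voma -m vmfs -f check -d /tmp/vmfs-dump.bin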

Condusiv V-locity 4 – New caching feature

I recently got hold of a copy of Condusiv’s new V-locity 4 product, which was released last month. Condusiv is the new name for Diskeeper, which you may have heard of before. I first came across them as a provider of software specializing in optimizing I/O, primarily by preventing file fragmentation on NTFS in a Windows Guest OS. I blogged about them in the past on the vSphere Storage Blog after some discussions around defragmentation in the Guest OS. The new feature takes a portion of memory and uses it as a block cache. I did some…

Heads Up! Storage DRS scheduler removes rules

One of my friends over in VMware support just gave me a heads-up on this issue. It affects virtual machines with anti-affinity VMDK rules defined (for keeping a virtual machine’s disks on different datastores in a datastore cluster) when the Storage DRS automation level is changed via scheduled tasks. The rule will be removed if you use the Storage DRS scheduler to change the automation level from automatic to manual and then back to automatic. The result is that a VM which had an anti-affinity rule and has its VMDKs on different datastores could end up with its VMDKs on the…

Auto LUN Discovery on ESXi hosts

Did you know that any newly presented LUNs/paths added to an already discovered target will automatically be discovered by your ESXi host, without a rescan of the SAN? In this example, I currently see two iSCSI LUNs from my NetApp array. Let’s see what happens when I add new devices to my ESXi host from a new target.
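For reference, a couple of ESXi shell commands can be used to watch this happen; the grep filter is simply one way to trim the output and is my own addition, not taken from the post.

    # List the SCSI devices the host currently sees (the two NetApp iSCSI LUNs
    # appear here, and new LUNs on an already discovered target show up
    # automatically without a rescan)
    esxcli storage core device list | grep -i "Display Name"

    # List the runtime paths, including which iSCSI target each path uses
    esxcli storage core path list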

VOMA – Found X actively heartbeating hosts on device

One of the long-awaited features introduced with vSphere 5.1 was VOMA (vSphere On-disk Metadata Analyzer). This is essentially a filesystem checker for both the VMFS metadata and the LVM (Logical Volume Manager). Now, if you have an outage on either the host or the storage side, you have a mechanism to verify the integrity of your filesystems once everything comes back up, giving you peace of mind that all is well after the outage. There is a requirement, however, to have the VMFS volume quiesced when running the VOMA utility. This post will look at some possible reasons…
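As a rough sketch of the kind of sequence involved (the datastore label and device name below are placeholders of my own, not values from the post):

    # Find the datastore and the device/partition backing it
    esxcli storage filesystem list

    # Quiesce the volume: power off or migrate its VMs, then unmount the
    # datastore from every host that has it mounted
    esxcli storage filesystem unmount -l MyDatastore

    # Run the metadata check against the backing VMFS partition
    voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1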

Error while adding NFS mount: NFS connection limit reached!

The advanced setting SunRPC.MaxConnPerIP defines the maximum number of unique TCP connections that can be opened for a given IP address. This is of particular interest to users of NFS. If the number of mounts to an IP address exceeds SunRPC.MaxConnPerIP, the existing connections are shared with the additional mounts from that IP address. Currently, VMware supports a maximum of 128 unique TCP connections per ESXi host, but also supports up to 256 mounts per host. So what options are available to configure ESXi hosts to allow the maximum number of NFS mounts?
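As one possible approach, the esxcli commands below are a sketch of how these advanced settings can be inspected and raised; check the current values and supported maximums on your own build before changing anything.

    # Check the current values of the relevant advanced settings
    esxcli system settings advanced list -o /SunRPC/MaxConnPerIP
    esxcli system settings advanced list -o /NFS/MaxVolumes

    # Raise them so that up to 256 NFS mounts can share the 128 available
    # TCP connections per host
    esxcli system settings advanced set -o /SunRPC/MaxConnPerIP -i 128
    esxcli system settings advanced set -o /NFS/MaxVolumes -i 256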