Heads Up! Storage DRS scheduler removes rules

One of my friends over in VMware support just gave me a heads-up on this issue. It affects virtual machines with anti-affinity VMDK rules defined (for keeping a virtual machine's disks on different datastores in a datastore cluster) when the Storage DRS automation level is changed via scheduled tasks. The rule will be removed if you use the Storage DRS scheduler to change the Storage DRS automation level from automatic to manual and then back to automatic. The result is that a VM which had an anti-affinity rule and has its VMDKs on different datastores could end up with its VMDKs on the…

Auto LUN Discovery on ESXi hosts

Did you know that any newly presented LUNs/paths added to an already discovered target will automatically be discovered by your ESXi host without a rescan of the SAN? In this example, I currently see two iSCSI LUNs from my NetApp array. Let’s see what happens when I add new devices to my ESXi host from a new target.
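
To make the before-and-after comparison concrete, here is a rough sketch of the ESXi shell commands that can be used to see what the host has already discovered (vmhba33 is just a placeholder for the software iSCSI adapter name on my host; yours will differ):

    # List the SCSI devices (LUNs) currently visible to this host
    esxcli storage core device list | grep "Display Name"

    # List the target portals already discovered by the software iSCSI adapter
    esxcli iscsi adapter target portal list --adapter=vmhba33

    # For comparison, this is the manual rescan you would normally run after
    # presenting storage from a brand new target
    esxcli storage core adapter rescan --adapter=vmhba33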

VOMA – Found X actively heartbeating hosts on device

One of the long-awaited features introduced with vSphere 5.1 was VOMA (vSphere On-disk Metadata Analyzer). This is essentially a filesystem checker for both the VMFS metadata and the LVM (Logical Volume Manager). If you have an outage on either the host or the storage side, you now have a mechanism to verify the integrity of your filesystems once everything comes back up, giving you peace of mind that everything is OK after the outage. There is a requirement, however, to have the VMFS volume quiesced when running the VOMA utility. This post will look at some possible reasons…
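
As a rough illustration of what a VOMA check looks like from the ESXi shell (the naa device ID below is a placeholder; substitute the device and partition backing your own VMFS volume, and make sure no VMs are running on it first):

    # Find the device and partition number backing the VMFS volume to be checked
    esxcli storage vmfs extent list

    # Run VOMA in check mode against that partition; the volume must be quiesced,
    # otherwise VOMA reports the actively heartbeating hosts it found on the device
    voma -m vmfs -f check -d /vmfs/devices/disks/naa.60a98000123456789abcdef012345678:1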

Error while adding NFS mount: NFS connection limit reached!

The advanced setting SunRPC.MaxConnPerIP defines the maximum number of unique TCP connections that can be opened to a given IP address. This is of particular interest to users of NFS. If the number of mounts to an IP address exceeds SunRPC.MaxConnPerIP, then the existing connections for NFS mounts are shared with new mounts from the same IP address. Currently, VMware supports a maximum of 128 unique TCP connections per ESXi host, but also supports up to 256 mounts per host. So what options are available to configure ESXi hosts to allow the maximum number of NFS mounts?
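
As a sketch of how these limits can be raised from the ESXi shell (the values shown are the maximums mentioned above; note that a change to SunRPC.MaxConnPerIP may only take effect after a host reboot):

    # Check the current connection limit per IP address
    esxcli system settings advanced list --option /SunRPC/MaxConnPerIP

    # Raise the limit to the supported maximum of 128 unique TCP connections
    esxcli system settings advanced set --option /SunRPC/MaxConnPerIP --int-value 128

    # Raise the maximum number of NFS mounts per host to 256 to match
    esxcli system settings advanced set --option /NFS/MaxVolumes --int-value 256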

vCenter Server 5.1.0b Released

This is a follow-up to my previous post on 5.0 Update 2. At the same time as that release, VMware also released vCenter 5.1.0b. This post will look at the storage items addressed in that update, although the issues addressed in the storage space are relatively minor compared to the enhancements made in other areas. Note that this update is for vCenter only – there is no corresponding ESXi 5.1 update.

vCenter 5.0U2 and ESXi 5.0U2 Released

Hi all. Prior to the holidays, on December 20th, VMware released new versions of vCenter and ESXi for both vSphere 5.0 and 5.1. In this post, I want to discuss release 5.0 Update 2, which contained a number of notable storage-specific fixes that I wanted to highlight. I will follow up with a look at the storage enhancements in the new 5.1 release in a future post.

NFS Best Practices – Part 4: Sizing Considerations

Welcome to the next instalment of NFS Best Practices. In this fourth and final best practices section on NFS, I asked a number of our major NAS storage partners some sizing questions. The questions fell into three categories: Do you have a recommended NFS volume size or sweet spot? Do you have a volume block size recommendation that should be configured on the array? Do you have a recommended number of VMs per NFS datastore? In fact, the responses from the vendors were all pretty similar. Let’s take a look at what they had to say. Maximum volume Size/Sweet Spot…