vSOM and VDPA Releases

In case you missed it, VMware released two interesting additions to the product portfolio last week.

vSOM – vSphere with Operations Manager

It became apparent that vCenter’s UI, while adequate for many activities such as VM provisioning, does not provide the level of detail or forecasting required to manage large and complex infrastructures on a day-to-day basis. This is why VMware has introduced a new SKU which bundles the vCenter Operations product with vSphere. vCenter Operations Manager (vCOps) is the most comprehensive product for monitoring the health and performance of your virtual infrastructure. It also monitors…

Does Software iSCSI on vSphere support IPsec?

This came up in a conversation today: does VMware’s Software iSCSI implementation support Internet Protocol Security (IPsec) in vSphere 5.1? IPsec secures IP communications coming from and arriving at an ESXi host. Although KB article 1021769 states that IPv6 is compatible with Software iSCSI, it doesn’t state whether or not IPsec is supported with Software iSCSI. To find that information, you have to reach for the vSphere Security Guide. Under the section ‘Securing iSCSI Devices Through Authentication’, it states: ESXi does not support Kerberos, Secure Remote Protocol (SRP), or public-key authentication methods for iSCSI. Additionally, it does not…
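For reference, the host’s IPsec configuration and the authentication method ESXi does support for iSCSI (CHAP) can both be inspected and set from the ESXi shell. A minimal sketch, assuming a software iSCSI adapter named vmhba37 and placeholder CHAP credentials (the adapter name and credentials are hypothetical):

    # List any IPsec security associations configured on the host
    # (ESXi 5.x supports IPsec for IPv6 traffic only)
    esxcli network ip ipsec sa list

    # Configure unidirectional CHAP, an authentication method
    # ESXi does support for Software iSCSI
    esxcli iscsi adapter auth chap set --adapter=vmhba37 \
        --direction=uni --level=required \
        --authname=chapuser --secret=chapsecret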

Heads Up! Storage DRS scheduler removes rules

One of my friends over in VMware support just gave me a heads-up on this issue. It affects virtual machines with anti-affinity VMDK rules defined (for keeping a virtual machine’s disks on different datastores in a datastore cluster) when the Storage DRS automation level is changed via a scheduled task. The rule will be removed if you use the Storage DRS scheduler to change the automation level from automatic to manual and then back to automatic. The result is that a VM which had an anti-affinity rule, and which had its VMDKs on different datastores, could end up with its VMDKs on the…

Auto LUN Discovery on ESXi hosts

Did you know that any newly presented LUNs/paths added to an already discovered target will automatically be discovered by your ESXi host without a rescan of the SAN? In this example, I currently see two iSCSI LUNs from my NetApp array. Let’s see what happens when I add new devices to my ESXi host from a new target.
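One way to observe this from the ESXi shell is to list devices before and after presenting the new LUNs. A minimal sketch, assuming an iSCSI adapter named vmhba33 (a hypothetical name for illustration):

    # List the SCSI devices currently visible to the host
    esxcli storage core device list

    # LUNs added to an already discovered target should show up
    # automatically; a brand new target still needs a rescan
    esxcli storage core adapter rescan --adapter=vmhba33

    # Or rescan every adapter on the host
    esxcli storage core adapter rescan --all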

VOMA – Found X actively heartbeating hosts on device

One of the long-awaited features introduced with vSphere 5.1 was VOMA (vSphere On-disk Metadata Analyzer). This is essentially a filesystem checker for both the VMFS metadata and the LVM (Logical Volume Manager). Now, if you have an outage on either the host or the storage side, you have a mechanism to verify the integrity of your filesystems once everything comes back up, giving you peace of mind that all is well after the outage. However, the VMFS volume must be quiesced when running the VOMA utility. This post will look at some possible reasons…
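To give an idea of the workflow, here is a minimal sketch of running VOMA against a quiesced VMFS volume from the ESXi shell. The naa device identifier is a placeholder, and the exact flags can vary between releases, so check voma -h on your host:

    # Identify the device and partition backing the VMFS datastore
    esxcli storage vmfs extent list

    # Check the VMFS metadata on that partition
    # (the naa identifier below is a placeholder)
    voma -m vmfs -d /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1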

Error while adding NFS mount: NFS connection limit reached!

The advanced setting SunRPC.MaxConnPerIP defines the maximum number of unique TCP connections that can be opened to a given IP address. This is of particular interest to users of NFS. If the number of mounts to an IP address exceeds SunRPC.MaxConnPerIP, the existing connections for NFS mounts are shared with new mounts from the same IP address. Currently, VMware supports a maximum of 128 unique TCP connections per ESXi host, but also supports up to 256 NFS mounts per host. So what options are available to configure ESXi hosts to allow the maximum number of NFS mounts?
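To make the arithmetic concrete: with a budget of 128 unique connections per host, lowering SunRPC.MaxConnPerIP leaves room for more NFS server IP addresses, and connection sharing then lets the mount count grow towards the 256 limit. For example, a value of 4 would allow 32 unique IPs (128 / 4), with the mounts from each IP sharing that IP’s 4 connections. A minimal sketch of checking and changing the relevant settings from the ESXi shell (the value 4 is purely illustrative):

    # Check the current per-IP connection limit
    esxcli system settings advanced list -o /SunRPC/MaxConnPerIP

    # Lower the per-IP limit so more NFS server IPs fit within
    # the 128-connection budget (the value 4 is illustrative)
    esxcli system settings advanced set -o /SunRPC/MaxConnPerIP -i 4

    # Raise the NFS datastore limit towards the 256 maximum
    esxcli system settings advanced set -o /NFS/MaxVolumes -i 256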