Error while adding NFS mount: NFS connection limit reached!

The advanced setting SunRPC.MaxConnPerIP defines the maximum number of unique TCP connections that can be opened for a given IP address. This is of particular interest to users of NFS. If the number of mounts to an IP address exceeds SunRPC.MaxConnPerIP, the existing connections for NFS mounts are shared with new mounts from the same IP address. VMware currently supports a maximum of 128 unique TCP connections per ESXi host, but also supports up to 256 mounts per host. So what options are available to configure ESXi hosts to allow the maximum number of NFS mounts?
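
As a rough sketch of how this is configured (assuming shell or SSH access to an ESXi 5.x host, and that the maximums quoted above apply to your build), the two advanced settings involved can be inspected and raised with esxcli:

    # Check the current limits (SunRPC.MaxConnPerIP and NFS.MaxVolumes)
    esxcli system settings advanced list -o /SunRPC/MaxConnPerIP
    esxcli system settings advanced list -o /NFS/MaxVolumes

    # Raise both to their documented maximums: 256 NFS mounts sharing
    # at most 128 unique TCP connections per IP address
    esxcli system settings advanced set -o /SunRPC/MaxConnPerIP -i 128
    esxcli system settings advanced set -o /NFS/MaxVolumes -i 256

If you raise NFS.MaxVolumes, the TCP/IP heap settings (Net.TcpipHeapSize and Net.TcpipHeapMax) generally need to be increased as well; the supported values differ between releases, so check the release notes and your array vendor's documentation.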

vCenter 5.0U2 and ESXi 5.0U2 Released

Hi all, prior to the holidays VMware released new versions of vCenter & ESXi on December 20th. There were new releases for both vSphere 5.0 & 5.1. In this post, I want to discuss release 5.0 Update 2. There were a number of notable fixes specific to storage which I wanted to highlight. I will follow up with a look at the storage enhancements in the new 5.1 release in a future post.

NFS Best Practices – Part 4: Sizing Considerations

Welcome to the next installment of NFS Best Practices. In this fourth and final best practice section on NFS, I asked a number of our major NAS storage partners some sizing questions. The questions fell into three categories: Do you have a recommended NFS volume size or sweet spot? Do you have a recommended volume block size that should be configured on the array? Do you have a recommended number of VMs per NFS datastore? In fact, the responses from the vendors were all pretty similar. Let’s take a look at what they had to say.

NFS Best Practices – Part 3: Interoperability Considerations

Welcome to part 3 of the NFS Best Practices series of posts. While part 1 looked at networking and part 2 looked at configuration options, this next post will look at interoperability with vSphere features. We are primarily interested in features which are in some way related to storage, and NFS storage in particular. While many of my regular readers will be well versed in most of these technologies, I’m hoping there will still be some items of interest. Most of the interoperability features are tried and tested with NFS, but I will try to highlight areas that might be…

NFS Best Practices – Part 2: Advanced Settings

Following on from part 1 of the NFS Best Practices, part 2 is going to look at tuning from a vSphere perspective. As mentioned, our objective is to update the NFS Best Practice white paper, which is now rather dated. There are quite a number of tunable parameters available to you when using NFS datastores. Before we drill into these advanced settings in a bit more detail, it is important to understand that the recommended values for some of these settings may (and probably will) vary from storage array vendor to storage array vendor. My objective is to…
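
To give a flavour of what these settings look like in practice, here is a minimal sketch using esxcli (these option names exist on ESXi 5.x, but the value passed to the set command is purely illustrative; always use the value your storage array vendor recommends):

    # Display a couple of the commonly tuned NFS options and their current values
    esxcli system settings advanced list -o /NFS/HeartbeatFrequency
    esxcli system settings advanced list -o /NFS/HeartbeatMaxFailures

    # Example only: apply a value recommended by your array vendor
    # (the "12" here is illustrative, not a VMware recommendation)
    esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 12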

NFS Best Practices – Part 1: Networking

There is a project currently underway here at VMware to update the Best Practices for running VMware vSphere on Network Attached Storage. The existing paper is a number of years old now, and we are looking to bring it up to date. There are a number of different sections that need to be covered, but we decided to start with networking, as getting your networking infrastructure right will obviously play a crucial part in your NAS performance and availability.

VAAI Comparison – Block versus NAS

I get a lot of questions around how the vSphere APIs for Array Integration (VAAI) primitives compare from a protocol perspective. For instance, a common question is to describe the differences between the primitives for NAS storage arrays (NFS protocol) and the primitives for block storage arrays (Fibre Channel, iSCSI, Fibre Channel over Ethernet protocols). It is a valid question because, yes, there are significant differences and the purpose of this blog post is to detail them for you.
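
As a quick, hedged illustration of one operational difference: the block primitives can be queried directly per device with esxcli, whereas the NAS primitives only become available after the array vendor's VAAI-NAS plugin (shipped as a VIB) has been installed on the host:

    # Block storage: show ATS, Clone, Zero and Delete support per device
    esxcli storage core device vaai status get

    # NAS storage: VAAI-NAS support comes from a vendor plugin, so check
    # the installed VIBs for the vendor's NAS plugin (name varies by vendor)
    esxcli software vib list | grep -i nas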