I just received notification of KB article 2016122, which VMware has recently published. It deals with a topic I have seen discussed on the community forums recently. The symptom is that during periods of high I/O, NFS datastores from NetApp arrays become unavailable for a short period of time before becoming available once again. This seems to be observed primarily when the NFS datastores are presented to ESXi 5.x hosts.
The KB article describes a workaround for the issue, which is to tune the queue depth on the ESXi hosts to reduce I/O congestion to the datastore. By default, the value of NFS.MaxQueueDepth is 4294967295 (which effectively means unlimited). The workaround is to change this value to 64, which has been shown to prevent the disconnects. A permanent solution is still being investigated.
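For those who prefer to make the change from the command line, here is a minimal sketch using esxcli (assuming shell or SSH access to the host; treat the KB article itself as the authoritative procedure, and check it for whether a host reboot is required for the new value to take effect):

```
# Display the current value of NFS.MaxQueueDepth (default is 4294967295)
esxcli system settings advanced list -o /NFS/MaxQueueDepth

# Apply the workaround value of 64 described in the KB article
esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64
```

The same setting can also be changed through the Advanced Settings dialog in the vSphere Client.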
I recommend all NetApp customers read this KB article, whether they have been impacted or not.
Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage
The advanced setting SunRPC.MaxConnPerIP defines the maximum number of unique TCP connections that can be opened to a given IP address, which is of particular interest to users of NFS. If the number of mounts to an IP address exceeds SunRPC.MaxConnPerIP, the existing connections for NFS mounts are shared with new mounts from the same IP address. Currently VMware supports a maximum of 128 unique TCP connections per ESXi host, but also supports up to 256 mounts per host. So what options are available to configure ESXi hosts to allow the maximum number of NFS mounts?
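As a sketch of one common approach (assuming esxcli access, and assuming your array presents its datastores from a small number of IP addresses), SunRPC.MaxConnPerIP can be raised to 128 and NFS.MaxVolumes to 256, so that multiple mounts to the same IP address share TCP connections:

```
# Allow up to 128 unique TCP connections per IP address
esxcli system settings advanced set -o /SunRPC/MaxConnPerIP -i 128

# Raise the maximum number of NFS mounts per host to 256
esxcli system settings advanced set -o /NFS/MaxVolumes -i 256
```

Keep in mind that increasing the number of NFS mounts generally also calls for increasing the TCP/IP heap settings (Net.TcpipHeapSize and Net.TcpipHeapMax); check the VMware documentation for the values appropriate to your release.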
Prior to the holidays, VMware released new versions of vCenter & ESXi on December 20th, with new releases for both vSphere 5.0 & 5.1. In this post, I want to discuss the 5.0 Update 2 release, which contains a number of notable fixes specific to storage that I wanted to highlight. I will follow up with a look at the storage enhancements in the new 5.1 release in a future post.
Welcome to the next installment of NFS Best Practices. In this fourth and final best practices section on NFS, I asked a number of our major NAS storage partners some sizing questions. The questions fell into three broad categories:
- Do you have a recommended NFS volume size or sweet spot?
- Do you have a recommended volume block size to configure on the array?
- Do you have a recommended number of VMs per NFS datastore?
In fact, the responses from the vendors were all pretty similar. Let’s take a look at what they had to say.
Welcome to part 3 of the NFS Best Practices series of posts. Part 1 looked at networking and part 2 at configuration options; this post looks at interoperability with vSphere features. We are primarily interested in features which are in some way related to storage, and to NFS storage in particular. While many of my regular readers will be well versed in most of these technologies, I'm hoping there will still be some items of interest. Most of the interoperability features are tried and tested with NFS, but I will try to highlight areas that might warrant additional consideration.