NFS TCP Connections on vSphere revisited

Some time back, nearly 6 years ago in fact, I wrote about how you might hit the NFS maximum value for the number of connections you can have per IP address when mounting a lot of shares from the same NFS target. You can find the article in question here. The question came up again recently, and I found that a few things have changed since I wrote that post. In this updated post, thanks to some feedback from our NFS engineers, I wanted to revisit this scenario and explain in some further detail what the limits are.

First of all, the major change is that we no longer use 2 separate connections when an NFS datastore is mounted to an ESXi host. In the past, there was one connection for control and another connection for data (I/O). These two connections were part of the same single TCP session used per mount. Today, there is only 1 connection per mount. This has been the case for quite some time; one of our engineers confirmed for me that there has been only 1 connection per mount since vSphere 5.0.

Now let's talk about the SunRPC.MaxConnPerIP setting that was mentioned in the article above all that time ago. This advanced setting defines the maximum number of connections per IP address that can be used for mounting NFS datastores to the ESXi host (the IP address could represent an NFS target, if you want to put it another way). If there are additional mounts over and above this value, those new mounts start sharing connections with other mounts. The default values for this advanced setting are still the same as before: the default is 4, and it can be raised to a maximum of 128 connections per IP address. This setting can be found under Host > Configure > System > Advanced System Settings in the vSphere client:
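For those who prefer the command line, the same setting can also be queried and changed with esxcli on the ESXi host. This is just a sketch; the value of 64 below is an example, and the exact output format may vary between ESXi versions.

```shell
# View the current SunRPC.MaxConnPerIP value on the ESXi host
esxcli system settings advanced list -o /SunRPC/MaxConnPerIP

# Raise the per-IP connection limit to 64 (example value; the maximum is 128)
esxcli system settings advanced set -o /SunRPC/MaxConnPerIP -i 64
```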

There is another system-wide limit to take into account as well: the maximum number of connections allowed altogether, which is capped at 256. So even with SunRPC.MaxConnPerIP set to 128, if you mounted more than 128 volumes via the same IP to an ESXi host, any further mounts would have to share a connection. Thus, for a single NFS target IP address, 128 unique connections are possible for 128 volumes; any further mounts from that IP address will start sharing connections.
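To make the sharing behaviour concrete, here is a small illustrative model in Python. This is not ESXi source code, just the arithmetic implied above: each mount from a target IP gets its own connection until the per-IP cap is reached, after which further mounts share the existing connections.

```python
# Illustrative model only (an assumption, not the ESXi implementation):
# how mounts from a single NFS target IP consume TCP connections.

def connections_used(num_mounts: int, max_conn_per_ip: int = 4) -> int:
    """Unique TCP connections consumed by num_mounts mounts from one IP."""
    # Each mount gets its own connection until the per-IP cap is hit;
    # any mounts beyond that share the already-open connections.
    return min(num_mounts, max_conn_per_ip)

def mounts_per_connection(num_mounts: int, max_conn_per_ip: int = 4) -> float:
    """Average number of mounts sharing each connection."""
    return num_mounts / connections_used(num_mounts, max_conn_per_ip)

# 130 volumes from one IP with the setting at its 128 maximum:
print(connections_used(130, 128))       # → 128 (two mounts must share)
# 256 volumes from one IP: on average two mounts per connection
print(mounts_per_connection(256, 128))  # → 2.0
```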

Note that there is still a maximum of 256 NFS datastores per host, but this limit applies per protocol version: you can have 256 NFS v3 datastores and 256 NFS v4.1 datastores mounted to the same host at the same time.

Now let’s take a TCP connections example. Let’s say that we are mounting from a NAS array that has 4 different interfaces, each with a unique IP address. Since the maximum number of connections is limited to 256, each IP could be given 64 unique connections by setting SunRPC.MaxConnPerIP to 64. This way there would be no reason to share connections.
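The sizing arithmetic from this example can be sketched as a one-liner: divide the host-wide cap of 256 connections evenly across the array's interfaces. The function name here is my own, purely for illustration.

```python
HOST_WIDE_CONN_LIMIT = 256  # total NFS connections per ESXi host

def max_conn_per_ip(num_target_ips: int,
                    host_limit: int = HOST_WIDE_CONN_LIMIT) -> int:
    """Largest SunRPC.MaxConnPerIP value that avoids connection sharing
    when volumes are spread evenly across num_target_ips addresses."""
    return host_limit // num_target_ips

print(max_conn_per_ip(4))  # → 64, matching the 4-interface example above
print(max_conn_per_ip(2))  # → 128, the per-IP maximum for the setting
```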

With that said, engineering has confirmed that there is no reason this value could not be raised to 256, to match the number of volumes that can be mounted to an ESXi host from the same IP address. They are going to try to get this value increased in a future release.

The latest paper on Network Attached Storage (NAS) for vSphere environments can be found here on StorageHub. I will look to get this content added to it asap.