vSphere 6.0 Storage Features Part 1: NFS v4.1

Although most of my time is dedicated to Virtual SAN (VSAN) these days, I am still very interested in the core storage features that are part of vSphere. I reached out earlier to a number of core storage product managers and engineers to find out what new and exciting features are included in vSphere 6.0. The first feature is one that I know a lot of customers are waiting on – NFS v4.1. Yes, it’s finally here.

Many readers will know that VMware has only supported NFS v3 for the longest time (I think it was introduced in ESX 3.0, way back in the day). Finally we have support for NFS 4.1.

Caution: do not mix protocols

A word of caution before we get into the details. Be aware that an NFS volume should not be mounted as NFS v3 on one ESXi host and as NFS v4.1 on another ESXi host. A best practice is to configure the NFS/NAS array to allow access via only one NFS protocol, either NFS v3 or NFS v4.1, but not both. The reason is locking: NFS v3 uses proprietary client-side co-operative locking, while NFS v4.1 uses server-side locking, so clients using the two protocols cannot coordinate access to the same files. When creating an NFS datastore, this is clearly called out in the Add Storage wizard:

[Screenshot: choosing NFS v3 or NFS v4.1 in the Add Storage wizard]
Yes – that does say “data corruption” folks, so let’s be careful out there.
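
If you want to double-check which protocol a given host is using for its mounts, the two versions show up under separate esxcli namespaces. A quick sketch from the ESXi shell (nothing array-specific assumed here):

# List datastores mounted with NFS v3 on this host
esxcli storage nfs list

# List datastores mounted with NFS v4.1 on this host
esxcli storage nfs41 list

If the same volume shows up in the v3 list on one host and in the v4.1 list on another, that is exactly the mixed-protocol situation the wizard is warning about.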

Multipathing and Load-balancing

Now onto the improvements. NFS v4.1 introduces better performance and availability through load balancing and multipathing. Note that this is not pNFS (parallel NFS). pNFS support is not in vSphere 6.0.

[Screenshot: setting up an NFS v4.1 datastore]
In the Server(s) field, add a comma-separated list of IP addresses for the server if you wish to use load balancing and multipathing.
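
The same mount can also be created from the ESXi shell. Treat this as a sketch: the IP addresses, export path and datastore name below are made up for illustration, and it is worth confirming the exact options with esxcli storage nfs41 add --help on your build:

# Mount an NFS v4.1 datastore using two server IP addresses for multipathing
esxcli storage nfs41 add -H 11.11.11.1,11.11.11.2 -s /vol/datastore01 -v NFS41-DS01

Note that the comma-separated addresses should all belong to the same NFS server/array; this is session trunking to one server, not a way to stripe a datastore across different servers.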

Security/Kerberos

Another major enhancement with NFS v4.1 is the security aspect. With this version, Kerberos and thus non-root user authentication are both supported. With version 3, remote files were accessed with root permissions, and servers had to be configured with the no_root_squash option to allow root access to files. This is known as the AUTH_SYS mechanism. While this method is still supported with NFS v4.1, Kerberos is a much more secure mechanism. An NFS user is now defined on each ESXi host using esxcfg-nas -U -v 4.1, and this is the user that is used for remote file access. One should use the same user on all hosts. If two hosts are using different users, you might find that a vMotion task will fail.

There is a requirement on Active Directory for this to work, and each ESXi host should be joined to the AD domain. Kerberos is enabled when the NFS v4.1 datastore is being mounted to the ESXi host.

[Screenshot: enabling Kerberos authentication when mounting the datastore]
Note the warning message that each host mounting this datastore needs to be part of an AD domain.
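
For reference, the Kerberos option can also be specified when mounting from the command line. This is a hedged sketch: the export path and datastore name are invented, and I am quoting the security flag from memory, so verify it with esxcli storage nfs41 add --help before relying on it:

# Mount an NFS v4.1 export using Kerberos (SEC_KRB5) instead of AUTH_SYS.
# The host must already be joined to AD and have its NFS user configured as described above.
esxcli storage nfs41 add -H 11.11.11.1,11.11.11.2 -s /vol/securevol -v NFS41-KRB --sec SEC_KRB5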

Interoperability

There are, however, some limitations when using NFS v4.1 datastores with other core vSphere 6.0 features. While NFS v4.1 volumes can be used with features like DRS and HA, NFS v4.1 is not supported with Storage DRS, Storage I/O Control, Site Recovery Manager or Virtual Volumes.

[Update – March 20th, 2015] I had a few questions about interop with Fault Tolerance. VMs on NFS v4.1 support FT, as long as it is the new FT mechanism introduced in vSphere 6.0; VMs running on NFS v4.1 do not support the old, legacy FT mechanism. In vSphere 6.0, the newer Fault Tolerance mechanism can accommodate symmetric multiprocessor (SMP) virtual machines with up to four vCPUs. Earlier versions of vSphere used a different technology for Fault Tolerance (now known as legacy FT), with different requirements and characteristics, including a limit of a single vCPU for legacy FT VMs.

So lots of nice new features with NFS v4.1 around performance, multipathing, load balancing and security, and we can finally move away from using NFS v3.

[Update] There have been a few questions about whether or not multiple datastores can be presented to ESXi hosts over NFS v4.1. The answer is yes. We certainly support multiple NFS v4.1 datastores per array.
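
To make that concrete, mounting a second datastore from the same array is simply another mount with a different export path and volume name (again, the addresses, paths and names here are invented for illustration):

esxcli storage nfs41 add -H 11.11.11.1,11.11.11.2 -s /vol/datastore01 -v NFS41-DS01
esxcli storage nfs41 add -H 11.11.11.1,11.11.11.2 -s /vol/datastore02 -v NFS41-DS02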

15 Replies to “vSphere 6.0 Storage Features Part 1: NFS v4.1”

  1. So are IPv6 and NFS 4.1 supported at this point? Back in September you called out that IPv6 and NFS v3 are not a supported config by VMware, and we have seen a number of PSODs attributed to IPv6 and NFS. The recent patch in January seems to have solved the PSOD issue, but now hosts are randomly dropping their IPv6 stack instead of a PSOD and are disconnecting NFS datastores.

    1. This is what I’ve got on IPv6 support – we support NFSv4.1 with AUTH_SYS over IPv6. We do not support NFSv4.1 with Kerberos over IPv6.

      1. Great to hear. If you see the Kerberos folks around at VMware, please let them know they need to support more than the 3 insecure ciphers for Kerberos. RC4 is subject to impersonation attacks now, and DES has been insecure for a while.

        1. True, but AES/DES is a step up from AUTH_SYS with root access in v3. Besides, it is perhaps the highest common factor among the v4.1 server vendors they planned to support at the time of design. Since the framework is now in place, adding more secure encryption standards in the future may just be a function of user demand.

  2. Hello,
    Nice post!
    I’ve tried it out and it works like a charm. But if I use several VMkernel ports, the ESXi host only uses one of them to connect to the two IP addresses of my NAS.
    Is there a way to use multipathing more efficiently and make my ESXi host use both VMkernel ports?
    (The general idea is to use multipathing over 2x 1Gb interfaces and get a 2Gb connection to my NAS.)

    Thanks,

    Benoit.

    1. I think the onus is on the array to support multipathing/session trunking. Do you know if your NAS array supports it? My understanding is that support for NFS 4.1 does not necessarily mean support for multipathing/session trunking.

      1. Thanks for your answer Cormac,

        Here are the results of my test:

        My ESXi host has 2 IP addresses, let’s say 10.10.10.1 & 10.10.10.2,
        and my NAS has 2 IP addresses as well: 11.11.11.1 & 11.11.11.2.

        When I mount my NAS using NFS 4.1, I can see that 2 sessions are created and bound successfully, but the sessions are from 10.10.10.1 to 11.11.11.1 and from 10.10.10.1 to 11.11.11.2…
        So my ESXi host does not use the 10.10.10.2 interface to reach the NAS… Can I fix this in order to have 4 sessions in “full mesh”, or have 2 “parallel” sessions, one from 10.1 to 11.1 and one from 10.2 to 11.2 (like SMB3 multichannel)?

        Thanks !

        PS: since my sessions are bound successfully, I assume that my NAS is running NFS 4.1 and session trunking correctly.
