Native File Services for vSAN 7

On March 10th 2020, we saw a plethora of VMware announcements around vSphere 7.0, vSAN 7.0, VMware Cloud Foundation 4.0 and of course the Tanzu portfolio. The majority of these announcements tie in very deeply with the overall VMware company vision: any application on any cloud on any device. Those applications have traditionally been virtualized applications. Now we are turning our attention to newer, modern applications which are predominantly container based, and predominantly run on Kubernetes. Our aim is to build a platform which can build, run, manage, connect and protect both traditional virtualized applications and modern containerized applications.

And so we come to vSAN File Services. vSAN File Services, fully integrated into vSAN 7, provides the ability to create file shares alongside block volumes on your vSAN datastore. These file shares are accessible via the NFS v3 and NFS v4.1 protocols. It is very simple to set up and use, as you would expect with vSAN. It is also fully integrated with the vSAN Virtual Object Viewer and Health, with its own set of checks for both the File Server Agents and the file shares themselves. So let's take a look at vSAN File Service in action.

Disclaimer: To be clear, this post is based on a pre-GA version of the vSAN Native File Services product. While the assumption is that not much, if anything, will change between now and when the product becomes generally available, I want readers to be aware that features and UI may still change before that.

Simple Deployment Mechanism

Let's begin with a look at how to deploy vSAN File Services. It really is very straightforward, and is enabled in much the same way as previous vSAN services. We'll begin with a 3-node vSAN 7 cluster, as shown below.

Next, in the vSphere Client, navigate to the vSAN Services section where you will now see vSAN File Services. It is currently Disabled – let’s Enable it.

This launches the vSAN File Service wizard. The introduction is interesting as it shows file shares being consumed by both VMs and containers. Like I said, we are building a platform for both virtualized and modern applications. In this post, we will concentrate on virtual machines, but we will look at containers consuming vSAN File Service in a future post.

File Services is implemented on vSAN as a set of File Server Agents, managed by the vSphere ESX Agent Manager. These are very lightweight, opinionated virtual appliances running Photon OS and Docker. They behave as NFS servers and provide access to the file shares. When File Services is Generally Available, you will be able to download the agent image directly from the internet (My VMware, I assume). Alternatively, for sites that do not have internet access, you can download the File Service Agent OVF offline.

Now an interesting item to highlight in the Introduction screen is the Distributed File System sitting between the NFS File Shares and vSAN. This Distributed File System component is how we share the configuration state (file share names, file share redirects, etc.) across all of the File Service Agents. Should a File Service Agent fail on one of the ESXi hosts in the vSAN cluster, the agent can be restarted on any other host in the cluster and continue to have access to all of the metadata around its file shares.

The next step is to provide some information around the domain, such as security mode and DNS. I have called the file service domain “vsan-fs” in this example. At present, in the vSAN 7 release, only AUTH_SYS is supported as the NFS authentication method. This means that the agents enforce file system permissions for users of the NFS file share using Unix-like permissions, in the same way permissions are enforced for local users.
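
To illustrate what AUTH_SYS means in practice, here is a minimal sketch run against an already-mounted share. The mount point (/mnt/vsan-share) and the user (cormac) are hypothetical, and I'm assuming root squash is disabled for the mounting host. The server simply trusts the UID/GID presented by the NFS client, so ownership and mode bits behave just like a local Unix filesystem:

# mkdir /mnt/vsan-share/projects
# chown cormac:cormac /mnt/vsan-share/projects
# chmod 750 /mnt/vsan-share/projects
# sudo -u nobody ls /mnt/vsan-share/projects
ls: cannot open directory '/mnt/vsan-share/projects': Permission denied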

Now we get to the Networking section, where you select the VM network on which the File Server Agents are deployed, and of course how the file shares are accessed. Usual stuff like Subnet Mask and Gateway are also added at this point.

And the final step is to provide an IP Pool for the File Service Agents. What is nice about this is that you can auto-fill the IP addresses if you have a range of IPs available for the agents. There is also an option to look up the DNS names of the File Service Agents rather than type them in manually. You can then select which IP address is the primary, and this is the address that will be used for accessing all your NFS v4.1 shares. At the back-end, the connection to the share could be redirected to a different File Service Agent using NFSv4 referral. NFSv4 referral is an NFSv4-specific mechanism that allows one File Service Agent to redirect a client connection/request to another File Service Agent as it crosses a directory (hard-coded as /vsanfs/ in Native vSAN File Services). We will see this behaviour in action shortly.
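
Since the wizard can look up DNS names for you, it is worth verifying beforehand that both forward and reverse records exist for every IP in the pool. Here is a quick sanity check from any client that uses the same DNS server, run against one of my lab IPs (the hostname fs01.rainpole.com is hypothetical):

# nslookup 10.27.51.194
194.51.27.10.in-addr.arpa    name = fs01.rainpole.com.

# nslookup fs01.rainpole.com
Name:    fs01.rainpole.com
Address: 10.27.51.194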

Review your selection and click Finish to start the deployment.

This will lead to a number of tasks being initiated in your vSphere environment, which I’ve captured here:

When the tasks complete, you will have a new vSAN File Service Node/Agent per vSAN node (3 agents for my 3-node vSAN cluster).

And if we take a look at the vSAN File Service in vSAN Services, we now see it enabled with various additional information, most of which we provided during the setup process:

One other feature to point out here is the 'Check Upgrade' option. Should a new release of the File Service Agent become available, you can use this feature to perform a rolling upgrade of the agents whilst maintaining full availability of all the file shares. And just like we saw with the initial deployment, if your vCenter has internet connectivity, you can pull down the new versions automatically. If your vCenter does not have internet connectivity, you can download the new OVF offline and upload it manually.

Now that we have enabled vSAN File Services, let’s go ahead and create our first file share.

Simple File Share Creation Mechanism

A new menu item now appears in Configure > vSAN called File Service Shares. You can see the domain (vsan-fs) that we created during setup, the supported protocols (NFS v3 and v4.1), and the Primary IP. One of the File Service Agents was designated as the primary during setup, and this is the IP address used for mounting all NFS v4.1 shares. Connections are redirected internally to the actual IP address of the agent presenting the share using NFS referral. Click on ADD to create the first share.

In this wizard, we provide a name for the share. We also pick a vSAN storage policy for the share. The policy can include all of the performance and availability features that we associate with block storage policies. We also have the option of setting a warning threshold and a hard quota. A hard quota means that we will not be able to place any additional data on the share once the quota is reached, even though there may still be plenty of free space available on the vSAN datastore. Finally, you can also add labels to the file share. Labels are heavily utilized in the world of Kubernetes, so it can be useful to mark a share as being used for a particular application. We will discuss how Kubernetes can consume these file shares in another post.

The only other step is to specify which networks can access the file share. You can make the share wide open, accessible from any network, or you can specify which particular networks can access the file share and what permissions they have, e.g. Read Only or Read Write. The Root squash checkbox enables a security technique used heavily in NFS which 'squashes' the permissions of any root user who mounts and uses the file share.
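
To see what root squash does in practice, here is a small sketch (the mount point is the same hypothetical /mnt/vsan-share as before, and the exact anonymous user name depends on the client distribution). With root squash enabled on the share, a root user on the client is mapped to an unprivileged anonymous user, so files created by root are not owned by root:

# touch /mnt/vsan-share/rootfile
# ls -l /mnt/vsan-share/rootfile
-rw-r--r-- 1 nobody nogroup 0 Mar 10 10:00 /mnt/vsan-share/rootfile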

Review the file share and click Finish to create it.

When the file share is created, it will appear in the File Service Shares as shown below.

Again, very straightforward: the file shares are created with just a few clicks in the UI. And of course, all shares are deployed on the vSAN datastore, and are configured for both availability and performance through Storage Policy Based Management (SPBM).

I also want to highlight that if you need to change the warning threshold (soft quota) or hard quota of a file share, add some labels, or change the network access, simply select the file share, click Edit, and make your changes on the fly. Pretty simple once again.

vSAN Object View and Health Integration

Since the file share is instantiated on vSAN, you can simply click on the name of the file share from the file shares view and see the file share details in the Virtual Objects view as shown below.

You can also click on the “View placement Details” link to see the layout of the underlying vSAN object, i.e. which hosts and which physical storage devices are used for placing the components of the file share object. And of course, the Health Service has been extended to include a number of additional health checks specific to the vSAN File Service.

Now that the file share has been built, let's consume it from an external Linux (Ubuntu 18.04) VM.

Consuming File Shares (NFS v3 and NFS v4.1)

vSAN File Shares can be mounted as either NFS v4.1 or NFS v3. The main point to get across here is how to use the NFS v4.1 primary IP mechanism with NFSv4 referral. If we use the standard showmount command, we do not see the root share folder that is needed for NFSv4 referral (hard-coded to /vsanfs/):

# showmount -e 10.27.51.194
Export list for 10.27.51.194:


# showmount -e 10.27.51.195
Export list for 10.27.51.195:
/first-share 10.27.51.0/24


# showmount -e 10.27.51.196
Export list for 10.27.51.196:

As we can see, the file share /first-share is on my second File Service Agent, the one with the IP address ending in .195. To mount this file share as NFS v4.1 via the Primary IP and the NFSv4 referral mechanism, I need to include the root share (/vsanfs) in the mount path even though it is not shown in the showmount output. This will then refer the client's mount request to the appropriate File Service Agent.

Below is the command to mount the file share using NFS v4.1. NFS v4.1 is also the default mount version if no protocol is specified. In that case, the client will negotiate the mount protocol with the server and mount with the highest matching version, which for vSAN 7 Native File Services is NFS v4.1:

# mount 10.27.51.194:/vsanfs/first-share /newhome/

or

# mount -t nfs -o vers=4.1 10.27.51.194:/vsanfs/first-share /newhome/


# mount | grep newhome
10.27.51.194:/first-share on /newhome type nfs4 (rw,relatime,vers=4.1\
,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2\
,sec=sys,clientaddr=10.27.51.18,local_lock=none,addr=10.27.51.194)

If you want to mount a share using NFSv3, the root share (/vsanfs) is not used in the mount path. In this case, you mount directly from the 'owning' File Service Agent, which you can determine from the showmount command. There is no referral/redirect mechanism with this protocol. Here is an example of mounting a second share (/second-share) over NFS v3, directly from the File Service Agent which owns it (the agent at .196).

# mount -t nfs -o vers=3 10.27.51.196:/second-share /newhome/

# mount | grep newhome
10.27.51.196:/second-share on /newhome type nfs (rw,relatime,vers=3\
,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2\
,sec=sys,mountaddr=10.27.51.196,mountvers=3,mountport=20048,mountproto=udp,\
local_lock=none,addr=10.27.51.196)

And don't worry if you can't remember the correct syntax. In the vSphere UI, simply select the file share. Note that a few additional items now appear in the toolbar, such as 'Copy URL':

Next, click Copy URL and decide whether you want to mount the share using NFSv3 or NFSv4.

First, let's choose NFSv4. You are now shown the connection string that should be used to mount this file share using the referral /vsanfs/ root directory:

Or if I decide that I want to mount as NFSv3, this is the connection string that should be used:

This time there is no referral directory included; you need to mount the share directly from the ‘owning’ agent.
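
If you want the share mounted automatically at boot, the same connection strings can be added to /etc/fstab on the client. Here is a sketch covering both protocols, using the primary IP with the /vsanfs/ referral path for NFS v4.1, and the owning agent for NFS v3 (the second mount point, /newhome2, is hypothetical):

10.27.51.194:/vsanfs/first-share  /newhome   nfs  vers=4.1  0  0
10.27.51.196:/second-share        /newhome2  nfs  vers=3    0  0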

Quota Alarms and Events

As we saw in the file share creation, we have the ability to set a share warning threshold. If we exceed this threshold, we see the ‘Usage over quota’ field turn orange in the UI.

There is also a share hard quota. Once you reach it, you won't be able to write any more data to the share; writes will fail with a Disk quota exceeded error, as shown below.
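
For example, here is roughly what that looks like if you force it by writing a file larger than the remaining quota with dd (the file name and sizes are arbitrary):

# dd if=/dev/zero of=/newhome/bigfile bs=1M count=2048
dd: error writing '/newhome/bigfile': Disk quota exceeded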

If the hard quota is exceeded, you'll also get the following alarm displayed in the vSphere UI:

Conclusion

We've been working on a Native File Service solution for vSAN for a very long time. It's great to finally see it make it to release. I think the engineering team have done an excellent job with (a) keeping the simplicity approach that defines vSAN and (b) integrating with other key vSAN features like the health checks and virtual objects view.

We have also seen how easy it is for a virtual machine to consume a file share, and how useful this can be for traditional virtualized applications. But what about containers? Can they consume these file shares just as easily? Watch this space and I'll share with you how we are planning to extend our CSI driver and CNS (Cloud Native Storage) implementation to consume vSAN File Service, allowing both traditional virtualized applications and newer, modern, containerized applications to consume this service – any app, any cloud, any device.

To learn more about vSAN 7 features, check out the complete vSAN 7 Announcement here. To learn more about VMware Cloud Foundation 4, of which vSAN 7 is an integral part, check out the complete VCF 4 Announcement here.
