A few weeks back, just after the vSphere 7.0 launch event, I wrote an article about Native File Services in vSAN 7.0. I had a few questions asking why we decided on NFS support in this initial release, and not something like SMB or some other protocol. The reason is quite straightforward. We are positioning vSAN as a platform for both traditional virtual machine workloads and newer containerized workloads. We chose NFS to address a storage requirement in Kubernetes, namely a way to share Persistent Volumes between Pods. To date, the vSphere CSI driver only provisioned block-based Persistent Volumes…
On March 10th 2020, we saw a plethora of VMware announcements around vSphere 7.0, vSAN 7.0, VMware Cloud Foundation 4.0 and, of course, the Tanzu portfolio. The majority of these announcements tie in very deeply with the overall VMware company vision, which is any application on any cloud on any device. Those applications have traditionally been virtualized applications. Now we are turning our attention to newer, modern applications which are predominantly container-based, and predominantly run on Kubernetes. Our aim is to build a platform which can build, run, manage, connect and protect both traditional virtualized applications and modern containerized…
After a very eventful VMworld, we received lots of questions about CNS, the Cloud Native Storage feature that was released with vSphere 6.7U3. Whilst most of the demonstrations and blog articles around CNS focused on vSAN, what may have been missed is that this feature also works with both VMFS and NFS datastores. For that reason, I decided to create some examples of how CNS can also bubble up information in vSphere about Kubernetes Persistent Volumes (PVs) created on both VMFS and NFS datastores. Let’s begin by creating some simple policies to tag my VMFS datastore and my NFS datastore.…
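As a rough sketch of where that tagging approach leads (the policy and class names below are placeholders, and I'm assuming the vSphere CSI provisioner, csi.vsphere.vmware.com, rather than the older in-tree driver), StorageClasses that consume such tag-based policies could look something like this:

```yaml
# Hypothetical StorageClasses referencing tag-based storage policies
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vmfs-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "VMFS-Tag-Policy"   # placeholder policy that tags the VMFS datastore
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "NFS-Tag-Policy"    # placeholder policy that tags the NFS datastore
```

A PVC that requests vmfs-sc or nfs-sc would then land on the corresponding tagged datastore, and CNS can surface that volume in the vSphere Client.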
In my most recent 101 post on ReadWriteMany volumes, I shared an example whereby we created an NFS server in a Pod which automatically exported a File Share. We then mounted the File Share to multiple NFS client Pods deployed in the same namespace. We saw how multiple Pods were able to write to the same ReadWriteMany volume, which was the purpose of the exercise. I received a few questions on the back of that post relating to the use of Services. In particular, could an external NFS client, even one outside of the K8s cluster, access a volume from…
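To make the question concrete, a minimal sketch of such a Service might look like the following. I'm assuming the NFS server Pod carries an app: nfs-server label and that the cluster can hand out LoadBalancer addresses; a NodePort Service would be an alternative way to reach it from outside the cluster.

```yaml
# Hypothetical Service exposing the NFS server Pod to clients outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  type: LoadBalancer        # NodePort would also work for external access
  selector:
    app: nfs-server         # assumes the NFS server Pod is labelled app: nfs-server
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
```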
Over the last number of posts, we have spent a lot of time looking at persistent volumes (PVs) instantiated on some vSphere back-end block storage. These PVs were always ReadWriteOnce, meaning they could only be accessed by a single Pod at any one time. In this post, we will take a look at how to create a ReadWriteMany volume, based on an NFS share, which can be accessed by multiple Pods. To begin, we will use an NFS server image running in a Pod, and show how to mount the exported file share to another Pod, simply to get the…
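As a minimal sketch of the ReadWriteMany piece (the server address and export path below are placeholders for whatever the NFS server Pod actually exposes), a statically provisioned NFS-backed PV and a matching PVC could look like this:

```yaml
# Hypothetical NFS-backed PersistentVolume with ReadWriteMany access
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.10       # placeholder address of the NFS server Pod/Service
    path: /exports          # placeholder export path
---
# Claim that binds statically to the PV above; multiple Pods can mount it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""      # empty class to match the statically created PV
  resources:
    requests:
      storage: 5Gi
```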
Some time back, nearly 6 years ago in fact, I wrote about how you might hit the maximum number of NFS connections allowed per IP address when mounting a lot of shares from the same NFS target. You can find the article in question here. The question came up again recently, and I found that a few things have changed since I wrote that post. In this updated post, thanks to some feedback from our NFS engineers, I wanted to revisit this scenario and explain in further detail what the limits are. First of…
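For reference, a quick way to inspect and adjust the relevant limits on an ESXi host is through the advanced settings. The setting names below reflect recent ESXi releases and the values are only examples, so check the maximums supported by your version before changing anything:

```sh
# Check the current per-IP connection limit and the NFS volume limit
esxcli system settings advanced list -o /SunRPC/MaxConnPerIP
esxcli system settings advanced list -o /NFS/MaxVolumes

# Raise them if you need to mount many shares from the same NFS target
# (the values 64 and 256 here are illustrative, not recommendations)
esxcli system settings advanced set -o /SunRPC/MaxConnPerIP -i 64
esxcli system settings advanced set -o /NFS/MaxVolumes -i 256
```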
Over the past few weeks, I’ve been looking to update some of our older white papers on core storage topics. One of the outdated papers was on NFS, and a lot had changed in this space since the paper was last updated. Most notable was the introduction of support for NFS v4.1 in vSphere 6.0, along with Kerberos-based authentication. In vSphere 6.5, we also added Kerberos integrity checking. I decided to have a go at configuring this in my own lab. Before going any further, I need to thank Justin Parisi of NetApp for his guidance through this setup.…
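As a rough sketch of the end result (the server, share and datastore names below are placeholders, and I'm assuming the ESXi host has already been joined to Active Directory and given NFS Kerberos credentials), mounting an NFS v4.1 datastore with Kerberos from the command line looks something like this:

```sh
# Mount an NFS v4.1 datastore using Kerberos authentication (names are placeholders)
esxcli storage nfs41 add --hosts 10.0.0.20 --share /krbshare \
    --volume-name krb-ds --sec SEC_KRB5

# vSphere 6.5 added Kerberos integrity checking, selected with SEC_KRB5I
esxcli storage nfs41 add --hosts 10.0.0.20 --share /krbshare-int \
    --volume-name krb-ds-int --sec SEC_KRB5I

# Confirm the mounts and their security flavour
esxcli storage nfs41 list
```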