vSAN File Service backed Persistent Volumes Network Access Controls [Video]

A short video demonstrating how network access to Kubernetes Persistent Volumes backed by vSAN File Service file shares can be controlled. This allows an administrator to determine who has read-write access and who has read-only access to a volume, based on the network from which the volume is being accessed. This involves modifying the configuration file of the vSphere CSI driver, as shown in the following demonstration. The root squash parameter can also be controlled using this method. This links to a more detailed step-by-step write-up on how to configure the CSI driver configuration file and control…
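As a rough illustration of what that configuration looks like, the vSphere CSI driver's csi-vsphere.conf can carry NetPermissions sections along the following lines; the section labels and IP range here are placeholders rather than the exact values used in the video.

```
[NetPermissions "A"]
ips = "*"
permissions = "READ_ONLY"
rootsquash = false

[NetPermissions "B"]
ips = "10.20.20.0/24"
permissions = "READ_WRITE"
rootsquash = true
```

Each NetPermissions block maps a set of client IP addresses to an access level (READ_WRITE, READ_ONLY or NO_ACCESS), and the rootsquash flag controls whether root access is squashed for clients in that range.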

CNS – not just for vSAN

After a very eventful VMworld, we received lots of questions about CNS, the Cloud Native Storage feature that was released with vSphere 6.7U3. Whilst most of the demonstrations and blog articles around CNS focused on vSAN, what may have been missed is that this feature also works with both VMFS and NFS datastores. For that reason, I decided to create some examples of how CNS can also bubble up information in vSphere about Kubernetes Persistent Volumes (PVs) created on both VMFS and NFS datastores. Let’s begin by creating some simple policies to tag my VMFS datastore and my NFS datastore.…
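To give a flavour of the Kubernetes side of this, a StorageClass that consumes one of those tag-based policies through the vSphere CSI driver looks roughly like the sketch below; the class and policy names are illustrative only.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vmfs-sc                       # placeholder class name
provisioner: csi.vsphere.vmware.com   # vSphere CSI driver
parameters:
  storagepolicyname: "VMFS-Policy"    # hypothetical tag-based storage policy
```

A PVC created against a class like this lands on whichever datastore the tag-based policy selects, and CNS then surfaces the resulting volume in the vSphere Client.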

Finding VMDK path from PV VolumeHandle

I’ve been looking at ways in which we could query the mappings of objects between the Kubernetes layer and the vSphere layer. One thing that I really wanted to figure out was whether, if I have the VolumeHandle from a Persistent Volume in Kubernetes, I could easily find the datastore and path using PowerCLI. It looks like I can. Let’s begin with a look at the Persistent Volume, or PV for short. Note that this is a Kubernetes cluster that is using the new vSphere CSI driver.
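As a rough sketch of the approach, assuming PowerCLI is already connected to vCenter and that the VolumeHandle corresponds to a First Class Disk (FCD) id, something along these lines will resolve the backing datastore and VMDK path; the handle value and the datastore iteration are illustrative, not the exact script from the post.

```powershell
# Placeholder: paste the VolumeHandle from 'kubectl describe pv <pv-name>' here
$volumeHandle = "e1c13e10-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# The vStorageObjectManager handles First Class Disks (FCDs), which back CSI volumes
$si    = Get-View ServiceInstance
$voMgr = Get-View $si.Content.VStorageObjectManager

$id = New-Object VMware.Vim.ID
$id.Id = $volumeHandle

# Try each datastore until the storage object is found
foreach ($ds in Get-Datastore) {
    try {
        $vso = $voMgr.RetrieveVStorageObject($id, $ds.ExtensionData.MoRef)
        Write-Host "Datastore :" $ds.Name
        Write-Host "VMDK path :" $vso.Config.Backing.FilePath
        break
    } catch {
        # Not on this datastore - keep looking
    }
}
```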

Kubernetes Storage on vSphere 101 – NFS revisited

In my most recent 101 post on ReadWriteMany volumes, I shared an example whereby we created an NFS server in a Pod which automatically exported a File Share. We then mounted the File Share to multiple NFS client Pods deployed in the same namespace. We saw how multiple Pods were able to write to the same ReadWriteMany volume, which was the purpose of the exercise. I received a few questions on the back of that post relating to the use of Services. In particular, could an external NFS client, even one outside of the K8s cluster, access a volume from…
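For context, a Service that exposes such an NFS server Pod to clients might look roughly like this; the Service type, selector label and port list are assumptions for the sketch rather than the exact manifest from the post.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nfs-service          # hypothetical name
spec:
  type: LoadBalancer         # gives clients outside the cluster a reachable address
  selector:
    app: nfs-server          # assumed label on the NFS server Pod
  ports:
    - name: nfs
      port: 2049             # NFS
    - name: mountd
      port: 20048            # mountd
    - name: rpcbind
      port: 111              # rpcbind / portmapper
```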

Kubernetes Storage on vSphere 101 – ReadWriteMany NFS

Over the last number of posts, we have spent a lot of time looking at persistent volumes (PVs) instantiated on some vSphere back-end block storage. These PVs were always ReadWriteOnce, meaning they could only be accessed by a single Pod at any one time. In this post, we will take a look at how to create a ReadWriteMany volume, based on an NFS share, which can be accessed by multiple Pods. To begin, we will use an NFS server image running in a Pod, and show how to mount the exported file share to another Pod, simply to get the…
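To illustrate the access mode being discussed, here is a minimal sketch of a statically provisioned ReadWriteMany PersistentVolume and matching claim backed by an NFS export; the server address, export path and names are placeholders, not values from the post.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv                # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany           # can be mounted read-write by many Pods
  nfs:
    server: 10.0.0.10         # placeholder NFS server address
    path: /exports            # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc               # hypothetical name
spec:
  storageClassName: ""        # avoid dynamic provisioning; bind to the PV above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```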

Kubernetes Storage on vSphere 101 – Failure Scenarios

We have looked at quite a few scenarios when Kubernetes is running on vSphere, and what that means for storage. We looked at PVs, PVCs, Pods, Storage Classes, Deployments and ReplicaSets, and most recently we looked at StatefulSets. In a few of the posts we looked at some controlled failures, for example, when we deleted a Pod from a Deployment or from a StatefulSet. In this post, I wanted to look a bit closer at an uncontrolled failure, say when a node crashes. However, before getting into this in too much detail, it is worth highlighting a few of the…

Kubernetes Storage on vSphere 101 – StatefulSet

In my last post we looked at creating a highly available application that used multiple Pods in Kubernetes with Deployments and ReplicaSets. However, that was focused only on Pods. In this post, we will look at another way of creating highly available applications through the use of StatefulSets. The first question you will probably have is: what is the difference between a Deployment (with ReplicaSets) and a StatefulSet? From a high-level perspective, conceptually we can consider that the major difference is that a Deployment is involved in maintaining the desired number of Pods available for an application, whereas a…
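As a minimal sketch of the object type being discussed, a StatefulSet with per-Pod persistent storage looks along these lines; the names, image and StorageClass are illustrative only.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                      # hypothetical name
spec:
  serviceName: web               # headless Service providing stable Pod identities
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx           # illustrative image
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:          # each replica gets its own PVC, unlike a Deployment
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: vsan-sc   # assumed vSphere CSI StorageClass
        resources:
          requests:
            storage: 1Gi
```

Unlike a Deployment, each replica here keeps a stable name (web-0, web-1, web-2) and its own PersistentVolumeClaim across rescheduling.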