vSAN File Services and Cloud Native Storage integration (Video)

In this short video, I want to show some of the integration points between vSAN 7.0 File Services and Cloud Native Storage (CNS). We will use the CSI driver that ships with vSphere 7.0 to provision a new read-write-many persistent volume backed by a vSAN file share. A read-write-many persistent volume is one that can be accessed by multiple Kubernetes Pods simultaneously. I will then show how CNS provides the vSphere client with all sorts of useful information about the volume. This information is invaluable to a vSphere Admin when trying to figure out how vSphere storage is being consumed when…
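
To give a flavour of the provisioning side, here is a minimal StorageClass sketch for the vSphere CSI driver. The class name and the vSAN storage policy name are placeholders of my own rather than values taken from the video, so substitute a policy from your own environment.

```yaml
# Minimal sketch, assuming a vSAN storage policy created for file shares.
# "vsan-file-sc" and "vsan-fs-policy" are placeholder names.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsan-file-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vsan-fs-policy"
```

Any ReadWriteMany claim made against a class like this should then be satisfied by a vSAN file share rather than a block volume, and surfaced in the vSphere client through CNS.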

Read-Only Persistent Volumes on vSAN File Services

I’m writing this post because of a misconception I had regarding how read-only volumes were configured in Kubernetes. I thought this was controlled by the accessModes parameter in the PersistentVolumeClaim manifest file. This is not the case. It is controlled from the Pod specification, which to me seems a bit strange. Why would this not be controlled from the PVC manifest? One of our engineers pointed me to a few Kubernetes discussions on the behaviour of accessModes and readOnly here and here. It would seem that I am not the only one confused by this behaviour. In this post, I deploy…
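
To illustrate the point, here is a minimal sketch, with placeholder names of my own, of where the read-only setting actually lives: on the Pod’s volume and volume mount, not in the PVC’s accessModes.

```yaml
# Sketch: the PVC accessModes field does not make the mounted volume read-only.
# The readOnly flags in the Pod specification are what control that behaviour.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-ro                # placeholder Pod name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-vol
      mountPath: /data
      readOnly: true              # enforced here, at the Pod level
  volumes:
  - name: shared-vol
    persistentVolumeClaim:
      claimName: file-share-pvc   # placeholder PVC name
      readOnly: true              # and/or here, not via the PVC manifest
```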

Using Velero to backup and restore applications that use vSAN File Service RWX file shares

It has been a while since I looked at Velero, our backup and restore product for Kubernetes cluster resources. This morning I noticed that the Velero team just published version 1.4. This article uses the previous version, Velero v1.3.2, but the version should not make a difference to the article. In this post, I want to see Velero backing up and restoring applications that use read-write-many (RWX) volumes that are dynamically provisioned as file shares from vSAN 7.0 File Services. To demonstrate, I’ll create two simple busybox Pods in their own namespace. Using the vSphere CSI driver, Kubernetes…
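
For a rough idea of what such a backup looks like in declarative form, a Velero Backup resource scoped to the demo namespace might look like the sketch below. The namespace and backup names are placeholders of my own, and the same thing can be driven from the velero command line.

```yaml
# Sketch of a Velero Backup custom resource limited to one namespace.
# "busybox-demo" is a placeholder namespace holding the two busybox Pods.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: busybox-demo-backup
  namespace: velero            # the namespace where Velero is installed
spec:
  includedNamespaces:
  - busybox-demo
```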

A first look at vSphere with Kubernetes in action

In my previous post on VCF 4.0, we looked at the steps involved in deploying vSphere with Kubernetes in a Workload Domain (WLD). When we completed that step, we had rolled out the Supervisor Control Plane VMs and installed the Spherelet components which allow our ESXi hosts to behave as Kubernetes worker nodes. Let’s now take a closer look at that configuration, and I will show you a few simple Kubernetes operations to get you started on the Supervisor Cluster in vSphere with Kubernetes. Disclaimer: “Like my earlier posts, I want to be clear, this post is based on a…

Read-Write-Many Persistent Volumes with vSAN 7 File Services

A few weeks back, just after the vSphere 7.0 launch event, I wrote an article about Native File Services in vSAN 7.0. I had a few questions asking why we decided on NFS support in this initial release, and not something like SMB or some other protocol. The reason is quite straightforward. We are positioning vSAN as a platform for both traditional virtual machine workloads and newer containerized workloads. We chose NFS to address a storage requirement in Kubernetes, namely a way to share Persistent Volumes between Pods. To date, the vSphere CSI driver has only provisioned block-based Persistent Volumes…
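
As a sketch of what this looks like from the Kubernetes side, a read-write-many volume is requested simply by asking for the ReadWriteMany access mode in the PVC. The claim name, size and StorageClass below are placeholders of my own.

```yaml
# Sketch: a ReadWriteMany claim which, with the vSphere CSI driver and
# vSAN File Services, should be backed by an NFS file share.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: file-share-pvc             # placeholder claim name
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: vsan-file-sc   # placeholder StorageClass name
```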

Getting started with VMware Cloud Foundation (VCF) 4.0

On March 10th, VMware announced a range of new and updated products and features. One of these was VMware Cloud Foundation (VCF) version 4.0. In the following series of blogs, I am going to show you the steps to deploy VCF 4.0. We will begin with the deployment of a Management Domain. Once this is complete, we will commission some additional hosts and build our first workload domain (WLD). After that, we will deploy the latest version of the NSX-T Edge Cluster to our Workload Domain. The great news here is that this part has now been automated in VCF 4.0. Finally,…

Track vSAN Memory Consumption in vSAN 7

One of the most common requests in relation to vSAN performance is how much CPU and memory vSAN actually consumes on an ESXi host, i.e. what the overhead of running vSAN is. Through the vSAN Performance Service, we have been able to show both host and vSAN CPU usage for some time. However, up to now, we have only been able to show host memory usage, and not the overhead attributed to vSAN. It has also been extremely difficult to determine how much memory vSAN requires. Way back in 2014, with the first vSAN release, version 5.5, I wrote this…