I have been doing quite a bit of work on First Class Disks (FCD), also known as Improved Virtual Disks (IVD), over the past few months. One tool that has been extremely useful in improving my understanding of FCDs has been safekeeping, a tool developed by Max Daneri of VMware which is now available to download from GitHub. If you did not know, FCDs are used extensively in VMware’s new Cloud Native Storage (CNS) offering, currently available with vSphere/vSAN 6.7U3. Now, whilst the primary aim of this tool is to help backup vendors become familiar with…
I’ve been looking at ways in which we could query the mappings of objects between the Kubernetes layer and the vSphere layer. One thing I really wanted to figure out was whether, given the VolumeHandle from a Persistent Volume in Kubernetes, I could easily find the backing datastore and path using PowerCLI. It looks like I can. Let’s begin with a look at the Persistent Volume, or PV for short. Note that this is a Kubernetes cluster that is using the new vSphere CSI driver.
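For reference, here is a rough sketch of the part of the PV spec we care about when the volume is provisioned by the vSphere CSI driver; the names and IDs below are made up, but the volumeHandle field is the value we would be chasing on the vSphere side.

```yaml
# Sketch of a PV provisioned by the vSphere CSI driver (names and IDs are hypothetical)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-8a0c3f2e-example            # name assigned by the dynamic provisioner
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: vsan-default
  csi:
    driver: csi.vsphere.vmware.com
    # The volumeHandle corresponds to the First Class Disk (FCD) ID on the vSphere side;
    # this is the value used to locate the backing VMDK and its datastore.
    volumeHandle: 0b5e1c6d-1111-2222-3333-444455556666
    fsType: ext4
```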
In my most recent 101 post on ReadWriteMany volumes, I shared an example whereby we created an NFS server in a Pod which automatically exported a file share. We then mounted the file share to multiple NFS client Pods deployed in the same namespace. We saw how multiple Pods were able to write to the same ReadWriteMany volume, which was the purpose of the exercise. I received a few questions on the back of that post relating to the use of Services. In particular, could an external NFS client, even one outside of the K8s cluster, access a volume from…
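To frame the question, a Service placed in front of the NFS server Pod would look something like the sketch below; the labels and ports here are assumptions on my part rather than values from the original post, and changing the type to LoadBalancer or NodePort is one way a client outside the cluster could reach the server.

```yaml
# Sketch: a Service in front of an NFS server Pod (labels and ports are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  selector:
    app: nfs-server            # must match the labels on the NFS server Pod
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  # type: LoadBalancer         # uncomment to expose the service outside the cluster, if supported
```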
Over the last few posts, we have spent a lot of time looking at persistent volumes (PVs) instantiated on some vSphere back-end block storage. These PVs were always ReadWriteOnce, meaning they could only be accessed by a single Pod at any one time. In this post, we will take a look at how to create a ReadWriteMany volume, based on an NFS share, which can be accessed by multiple Pods. To begin, we will use an NFS server image running in a Pod, and show how to mount the exported file share to another Pod, simply to get the…
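For context, a ReadWriteMany volume backed by NFS is typically declared along the lines of the following sketch; the server address, export path and sizes are placeholders rather than the values used in the post.

```yaml
# Sketch: an NFS-backed PV and a ReadWriteMany claim (server, path and sizes are placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany            # allows the volume to be mounted by multiple Pods
  nfs:
    server: 10.0.0.10          # address of the NFS server (or its Service)
    path: /exports
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind to the statically created PV above
  resources:
    requests:
      storage: 5Gi
```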
We have looked at quite a few scenarios when Kubernetes is running on vSphere, and what that means for storage. We looked at PVs, PVCs, Pods, StorageClasses, Deployments and ReplicaSets, and most recently we looked at StatefulSets. In a few of the posts we looked at some controlled failures, for example, when we deleted a Pod from a Deployment or from a StatefulSet. In this post, I wanted to look a bit closer at an uncontrolled failure, say when a node crashes. However, before getting into this in too much detail, it is worth highlighting a few of the…
In my last post we looked at creating a highly available application that used multiple Pods in Kubernetes with Deployments and ReplicaSets. However, this was only focused on Pods. In this post, we will look at another way of creating highly available applications through the use of StatefulSets. The first question you will probably have is: what is the difference between a Deployment (with ReplicaSets) and a StatefulSet? From a high-level perspective, the major difference is that a Deployment is involved in maintaining the desired number of Pods available for an application, whereas a…
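To make that distinction a little more concrete, here is a bare-bones StatefulSet skeleton; the image, storage class and sizes are placeholders, but note the serviceName and volumeClaimTemplates fields, which give each Pod a stable identity and its own PVC, something a Deployment does not do.

```yaml
# Sketch: a minimal StatefulSet (image, storage class and sizes are placeholders)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web             # headless Service providing a stable network identity per Pod
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.17
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:        # each Pod gets its own PVC, unlike a Deployment
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: vsan-default
        resources:
          requests:
            storage: 1Gi
```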
In my previous 101 posts on Kubernetes Storage on vSphere, we saw how to create “static” persistent volumes (PVs) by mapping an existing virtual machine disk (VMDK) directly into a persistent volume (PV) manifest YAML file. We also saw that we could dynamically instantiate PVs through the use of a StorageClass. We saw how a StorageClass can also be used to apply features of the underlying vSphere storage, such as a storage policy, to a PV, and how Pods can consume either static or dynamic PVs through the use of persistent volume claims (PVCs). However, in both previous exercises, we…
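As a quick reminder of what the dynamic path looks like, here is a sketch of a StorageClass that passes a vSphere storage policy to the provisioner; the policy name is a placeholder, and the exact provisioner and parameter names depend on whether the in-tree vSphere volume plugin or the newer CSI driver is in use.

```yaml
# Sketch: a StorageClass referencing a vSphere storage policy (policy name is a placeholder)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-gold
provisioner: csi.vsphere.vmware.com   # the in-tree plugin uses kubernetes.io/vsphere-volume instead
parameters:
  storagepolicyname: "Gold-RAID1"     # SPBM policy applied to dynamically provisioned PVs
```

A PVC that references this StorageClass would then be provisioned as a PV placed on storage that satisfies the named policy.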