A number of readers have hit me up with queries around how they can use the embedded Harbor image registry that comes integrated with vSphere with Kubernetes for applications deployed on their Tanzu Kubernetes Grid clusters, sometimes referred to as guest clusters. Unfortunately, there is no defined workflow on how to achieve this yet. The reason is that there are a number of additional life-cycle management considerations that we need to take into account before we can fully integrate these components. This includes adding new TKG nodes to the image registry as a TKG cluster is scaled.…
In this short video, I want to show some of the integration points between vSAN 7.0 File Services and Cloud Native Storage (CNS). We will use the CSI driver that ships with vSphere 7.0 to provision a new read-write-many persistent volume backed by a vSAN file share. A read-write-many persistent volume is one that can be accessed by multiple Kubernetes Pods simultaneously. I will then show how CNS provides the vSphere client with all sorts of useful information about the volume. This information is invaluable to a vSphere Admin when trying to figure out how vSphere storage is being consumed when…
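For illustration, a read-write-many claim of this sort might look like the following minimal sketch. The StorageClass name "vsan-file-sc" is a placeholder; substitute whichever file-service-enabled class exists in your environment.

```yaml
# Minimal sketch: a ReadWriteMany PVC that the vSphere CSI driver can satisfy
# with a vSAN file share. "vsan-file-sc" is an assumed StorageClass name,
# not one taken from the post.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: file-share-pvc
spec:
  accessModes:
    - ReadWriteMany          # multiple Pods can mount this volume simultaneously
  resources:
    requests:
      storage: 2Gi
  storageClassName: vsan-file-sc
```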
Recently I was asked if “statically” provisioned persistent volumes (PVs) in native, vanilla Kubernetes would be handled by Cloud Native Storage (CNS) in vSphere 7.0 and in turn appear in the vSphere client, just like a dynamically provisioned persistent volume. The short answer is yes, this is supported and works. The details on how to do this are shown in this post. I am going to use a file-based (NFS) volume for this “static” PV test. Note that there are two ways of provisioning static file-based volumes. The first is to use the in-tree NFS driver. These are…
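As a rough illustration of the first approach, a statically provisioned PV using the in-tree NFS driver could be defined along these lines. The server address and export path below are placeholders for illustration only.

```yaml
# Minimal sketch of a statically provisioned, file-based PV using the
# in-tree NFS driver (the first of the two approaches mentioned above).
# Server address and export path are placeholders, not values from the post.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10        # placeholder NFS server / file share IP
    path: /exports/share-01  # placeholder export path
```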
I’m writing this post because of a misconception I had regarding how read-only volumes were configured in Kubernetes. I thought this was controlled by the accessModes parameter in the PersistentVolumeClaim manifest file. This is not the case. It is controlled from the Pod, which to me seems a bit strange. Why would this not be controlled from the PVC manifest? One of our engineers pointed me to a few Kubernetes discussions on the behaviour of accessModes and readOnly here and here. It would seem that I am not the only one confused by this behaviour. In this post, I deploy…
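To make the point concrete, here is a minimal sketch of where the read-only setting actually lives: on the Pod's reference to the claim, not in the PVC's accessModes. The names "ro-demo" and "data-pvc" are illustrative only.

```yaml
# Minimal sketch: read-only behaviour is set via readOnly: true on the Pod's
# persistentVolumeClaim reference, not via accessModes in the PVC manifest.
apiVersion: v1
kind: Pod
metadata:
  name: ro-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc   # illustrative claim name
        readOnly: true        # this flag, not the PVC's accessModes, makes the mount read-only
```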
I recently published an article around Velero and vSAN File Services, showing how Velero and the restic plugin could be used to back up and restore Kubernetes applications that use vSAN File Services. Today, I want to turn my attention to a very cool new plugin that we announced in mid-April, namely the Velero Plugin for vSphere. This open source plugin enables Velero to take a crash-consistent VADP* snapshot backup of a block Persistent Volume on vSphere storage, and store the backup on S3-compatible storage. * VADP is short for VMware vSphere Storage APIs – Data Protection. To utilize the…
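As a rough sketch of what such a backup request can look like (not the exact workflow from the post), a standard velero.io/v1 Backup resource that asks for volume snapshots of a namespace might be written as follows. The namespace name is a placeholder.

```yaml
# Minimal sketch, assuming the standard velero.io/v1 Backup resource: with the
# Velero Plugin for vSphere installed, snapshots of block PVs in the selected
# namespace can be taken on vSphere storage and offloaded to S3-compatible
# storage. "demo-app" is a placeholder namespace.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: demo-app-backup
  namespace: velero
spec:
  includedNamespaces:
    - demo-app            # placeholder namespace holding the stateful application
  snapshotVolumes: true   # request volume snapshots rather than a filesystem copy
```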
It has been a while since I looked at Velero, our backup and restore product for Kubernetes cluster resources. This morning I noticed that the Velero team just published version 1.4. This article uses the previous version of Velero, v1.3.2. The version should not make a difference to the article. In this post, I want to see Velero backing up and restoring applications that use read-write-many (RWX) volumes that are dynamically provisioned as file shares from vSAN 7.0 File Services. To demonstrate, I’ll create two simple busybox Pods in their own namespace. Using the vSphere CSI driver, Kubernetes…
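For reference, the two-Pod setup described above could look roughly like this: two busybox Pods in one namespace, both mounting the same ReadWriteMany claim. The claim name "shared-rwx-pvc" and namespace "velero-rwx-demo" are placeholders, not the names used in the post.

```yaml
# Minimal sketch: two busybox Pods mounting the same RWX claim simultaneously.
# Names are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod-1
  namespace: velero-rwx-demo
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: shared
          mountPath: /mnt/shared
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-rwx-pvc   # both Pods reference the same RWX claim
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod-2
  namespace: velero-rwx-demo
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: shared
          mountPath: /mnt/shared
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-rwx-pvc
```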
Since the release of VMware Cloud Foundation (VCF) 4.0 over a month ago, I have been asked one question repeatedly – when can I run vSphere with Kubernetes (formerly known as Project Pacific) on a VCF 4.0 Consolidated Architecture? In other words, when can I deploy vSphere with Kubernetes on the Management Domain rather than building a separate VI Workload Domain to run it? The main reason for this request is that it reduces the number of ESXi hosts required to run vSphere with Kubernetes from 7 down to 4. So I am delighted to announce that we now have…