Deploy Harbor embedded Image Registry on vSphere with Kubernetes (Video)

This short video will demonstrate how to deploy the embedded Harbor Image Registry in vSphere with Kubernetes. It will highlight the different PodVMs used for Harbor, as well as the Persistent Volumes that some of those PodVMs require. The demo will look at the integration between namespaces created in vSphere with Kubernetes and the Harbor projects. I will also show how to download the CA certificate to a client to enable remote access to Harbor. Finally, I will show how to tag and push some images to the image registry.
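For anyone who wants to try the tag-and-push step outside the video, the sequence looks roughly like this; the registry address, namespace and user below are placeholders, so substitute your own values:

  # Log in to the embedded Harbor registry (address and user are placeholders)
  docker login harbor.example.local -u user@vsphere.local
  # Tag a local image with the registry address and the namespace/project
  docker tag busybox:latest harbor.example.local/cormac-ns/busybox:latest
  # Push the tagged image up to Harbor
  docker push harbor.example.local/cormac-ns/busybox:latest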

Read-Only Persistent Volumes on vSAN File Services

I’m writing this post because of a misconception I had regarding how read-only volumes were configured in Kubernetes. I thought this was controlled by the accessModes parameter in the PersistentVolumeClaim manifest file. This is not the case. It is controlled from the Pod, which to me seems a bit strange. Why would this not be controlled from the PVC manifest? One of our engineers pointed me to a few Kubernetes discussions on the behaviour of accessModes and readOnly here and here. It would seem that I am not the only one confused by this behaviour. In this post, I deploy…
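As a minimal sketch of where the read-only setting actually lives, the snippet below (with made-up names) requests read-only behaviour on the Pod’s volume mount rather than in the PVC manifest:

  apiVersion: v1
  kind: Pod
  metadata:
    name: readonly-demo              # hypothetical Pod name
  spec:
    containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: data
        mountPath: /data
        readOnly: true               # read-only is requested here, in the Pod spec
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: file-share-pvc    # hypothetical PVC; its accessModes do not enforce read-only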

Using Velero to backup and restore applications that use vSAN File Service RWX file shares

It has been a while since I looked at Velero, our backup and restore product for Kubernetes cluster resources. This morning I noticed that the Velero team just published version 1.4. This article uses the previous version of Velero, v1.3.2, but the version should not make a difference to the article. In this post, I want to see Velero backing up and restoring applications that use read-write-many (RWX) volumes that are dynamically provisioned as file shares from vSAN 7.0 File Services. To demonstrate, I’ll create two simple busybox Pods in their own namespace. Using the vSphere CSI driver, Kubernetes…
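To give a flavour of the workflow ahead of the full walkthrough, a backup and restore of such a namespace with the Velero CLI looks something like this (the namespace and backup names are made up):

  # Back up everything in the namespace containing the busybox Pods
  velero backup create rwx-backup --include-namespaces busybox-ns
  velero backup describe rwx-backup
  # After deleting the namespace, restore it from the backup
  velero restore create --from-backup rwx-backup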

vSphere 7.0, Cloud Native Storage, CSI and offline volume extend

Another new feature added to the vSphere CSI driver in the vSphere 7.0 release is the ability to offline extend / grow a Kubernetes Persistent Volume (PV). This requires a special directive to be added to the StorageClass and, as per the title, the operation must be done offline whilst the PV is detached from any Pod. Let’s take a closer look at the steps involved, starting with a new CSI component, the csi-resizer. To enable resizing operations, this new component has been added to the vSphere CSI Controller. We can examine the csi-resizer and other components associated with the…
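As a rough sketch ahead of the full steps, the StorageClass directive in question is allowVolumeExpansion; the class and policy names below are assumptions, so substitute your own:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: expandable-sc                                # hypothetical StorageClass name
  provisioner: csi.vsphere.vmware.com
  allowVolumeExpansion: true                           # permits resize requests against PVs from this class
  parameters:
    storagepolicyname: "vSAN Default Storage Policy"   # assumed SPBM policy name

With that in place, the volume is grown offline by increasing spec.resources.requests.storage in the PVC while it is not attached to any Pod.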

vSphere 7.0, Cloud Native Storage, CSI and encryption support

A common request we’ve had for the vSphere CSI (Container Storage Interface) driver is to support encryption of Kubernetes Persistent Volumes using the vSphere feature called VMcrypt. Although we’ve had VM encryption since vSphere 6.5, this was a feature that we could not support in the first version of the CSI driver that we shipped with vSphere 6.7U3. However, I’m pleased to announce that we can now support this feature with the new CSI driver shipping with vSphere 7.0. The reason we can support it in vSphere 7.0 is that First Class Disks, also known as Improved Virtual Disks, now…
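A minimal sketch of how this surfaces to Kubernetes, assuming a storage policy that already has VM encryption enabled (the names here are made up):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: encrypted-sc                          # hypothetical StorageClass name
  provisioner: csi.vsphere.vmware.com
  parameters:
    storagepolicyname: "VM Encryption Policy"   # assumed policy with the encryption rule enabled

Any PV dynamically provisioned from a class like this should then be created as an encrypted First Class Disk.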

vSphere 7.0, Cloud Native Storage, CSI and vVols support

With the release of vSphere 7.0, we also announced enhancements to our Cloud Native Storage (CNS) offering. One of the new features that we now offer in vSphere 7.0 is the ability to provision Virtual Volumes (vVols) to back Kubernetes Persistent Volumes (PVs) via our updated version of the vSphere Container Storage Interface (CSI) driver. In this post, I will walk through the steps involved in consuming vVols via Kubernetes manifest files when dynamically provisioning PVs. I will also show some enhancements to our CNS UI in vSphere 7.0 so that you can easily identify vVol backed PVs. Step 1…
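As a sketch of the manifests involved, assuming an SPBM policy that selects a vVol datastore (all names below are placeholders):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: vvols-sc                        # hypothetical StorageClass name
  provisioner: csi.vsphere.vmware.com
  parameters:
    storagepolicyname: "vVols Policy"     # assumed policy that maps to a vVol datastore
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: vvols-pvc                       # hypothetical PVC name
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi
    storageClassName: vvols-sc            # dynamic provisioning creates a vVol-backed PV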

Building a TKG Cluster in vSphere with Kubernetes

Now that we have our vSphere with Kubernetes deployed, we take the next logical step in this post and deploy a Tanzu Kubernetes Grid (TKG) guest cluster. [Update] Whilst “guest cluster” isn’t an official name for the Tanzu Kubernetes cluster, I’ll use it in this post to differentiate it from the Supervisor cluster deployed with vSphere with Kubernetes. TKG is a fully CNCF-certified Kubernetes distribution. It is deployed as a set of virtual machines, in accordance with a TanzuKubernetesCluster manifest which we will look at later. The OS and K8s distribution are also specified in the manifest. There may…
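To give a feel for the manifest before we look at it properly, a minimal TanzuKubernetesCluster sketch might look like the following; the cluster name, namespace, VM classes, storage class and distribution version are all assumptions for illustration:

  apiVersion: run.tanzu.vmware.com/v1alpha1
  kind: TanzuKubernetesCluster
  metadata:
    name: tkg-cluster-01                          # hypothetical cluster name
    namespace: cormac-ns                          # assumed Supervisor namespace
  spec:
    distribution:
      version: v1.16                              # requested Kubernetes distribution
    topology:
      controlPlane:
        count: 1
        class: best-effort-small                  # assumed VM class
        storageClass: vsan-default-storage-policy # assumed storage class
      workers:
        count: 2
        class: best-effort-small
        storageClass: vsan-default-storage-policy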