CNS-CSI 2.1 with vSphere 7.0U1 – What’s new?

In this post, we will look at what is in the new release of the vSphere CSI driver for Kubernetes, as well as enhancements to Cloud Native Storage (CNS), which handles CSI requests on the vSphere infrastructure. The CSI improvements will be available in version 2.1 of the driver, and the CNS components will be part of vSphere 7.0U1. Both are required for the features discussed here. The main objective of this release is two-fold: (a) to add CNS-CSI features to vSphere with Kubernetes so that it offers a feature set similar to the CNS-CSI features available with vanilla Kubernetes,…

Failed to deploy PV to local volume – “No compatible datastore found for storagePolicy”

This is something that I “spun my wheels” on a little bit last week, so I decided I’d write a short article to explain the issue in a bit more detail. This is related to the provisioning of a Persistent Volume on the Supervisor cluster of a vSphere with Kubernetes deployment. I had a local VMFS volume on one of my hosts, so I went ahead and tagged the volume using vSphere Tagging. I then built a tag-based storage policy so that when that policy is selected for provisioning, the objects that get provisioned would be placed on that local,…
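
To make the provisioning flow concrete, here is a minimal sketch (using the Python kubernetes client, with a hypothetical namespace and StorageClass name rather than the ones from my lab) of requesting a Persistent Volume Claim against the StorageClass that the tag-based storage policy surfaces in the Supervisor cluster:

from kubernetes import client, config

# Authenticate against the Supervisor cluster kubeconfig
# (assumes a kubectl vsphere login has already been done)
config.load_kube_config()
core = client.CoreV1Api()

# "local-vmfs-policy" is a placeholder for the StorageClass created
# from the tag-based storage policy assigned to the namespace
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="local-vmfs-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="local-vmfs-policy",
        resources=client.V1ResourceRequirements(requests={"storage": "2Gi"}),
    ),
)

# If the policy cannot be satisfied (the issue described above),
# the PVC will not bind and the error surfaces in the PVC events
core.create_namespaced_persistent_volume_claim(namespace="demo-namespace", body=pvc)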

Cloud Native Storage (CNS) in vSphere with Kubernetes/Tanzu (Video)

A short video explaining the role of the vSphere CSI (Container Storage Interface) driver and CNS (Cloud Native Storage) in both the vSphere with Kubernetes/Tanzu Supervisor Cluster and in the Tanzu Kubernetes Grid (TKG) Guest Cluster. This video discusses the role of the CSI driver in the Supervisor cluster, and the pvCSI driver (para-virtual CSI driver) in the TKG guest cluster. We also look at how the pvCSI communicates with the CNS control plane in the vCenter Server via the CSI driver in the Supervisor Cluster to request Persistent Volume operations on behalf of the Guest Cluster.

A closer look at vSphere with Kubernetes Permissions

In many of my recent posts about vSphere with Kubernetes, I use a single user (administrator@vsphere.local) to do all of my work. This allows me to carry out a range of activities without worrying about permissions. This vSphere Single Sign-On (SSO) administrator has “edit” permissions on all of the vK8s namespaces. In this post, I want to look at how to assign some different vSphere SSO users and permissions to different namespaces, and also how these permissions are implemented in the vK8s platform (through the Kubernetes ClusterRole and RoleBinding constructs). Let’s start with a view of what a namespace looks…
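
As a quick way to see how those permissions surface on the Kubernetes side, here is a short sketch (Python kubernetes client; the namespace name is a placeholder) that lists the RoleBindings created in a vK8s namespace, showing which SSO subjects are bound to which roles:

from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# List the RoleBindings in a Supervisor namespace ("demo-namespace" is a placeholder)
for rb in rbac.list_namespaced_role_binding("demo-namespace").items:
    subjects = [f"{s.kind}/{s.name}" for s in (rb.subjects or [])]
    print(f"{rb.metadata.name}: roleRef={rb.role_ref.name} subjects={subjects}")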

vSphere with Kubernetes on VCF 4.0.1 Consolidated Architecture

VMware recently announced the availability of VMware Cloud Foundation (VCF) 4.0.1. I was particularly interested in this release as it introduced some enhancements around vSphere with Kubernetes deployments on the VCF Management Domain. We refer to the deployment of an application onto the management domain as a VCF consolidated architecture. Whilst we were able to deploy vSphere with Kubernetes on the management domain in VCF version 4.0, it was not seamlessly integrated. In particular, it was not possible to select the management domain to do the necessary vSphere with Kubernetes validation tests. In VCF 4.0.1, it is now possible to…

Integrating embedded vSphere with Kubernetes Harbor Registry with TKG (guest) clusters

A number of readers have hit me up with queries around how they can use the embedded Harbor image repository (which comes integrated with vSphere with Kubernetes) for applications that are deployed on their Tanzu Kubernetes Grid clusters, sometimes referred to as guest clusters. Unfortunately, there is no defined workflow on how to achieve this. The reason for this is that there are a number of additional life-cycle management considerations that we need to take into account before we can fully integrate these components. This includes adding new TKG nodes to the image registry as a TKG cluster is scaled…

Deploy Harbor embedded Image Registry on vSphere with Kubernetes (Video)

This short video will demonstrate how to deploy the embedded Harbor Image Registry in vSphere with Kubernetes. It will highlight the different PodVMs used for Harbor, as well as the Persistent Volumes required by some of the PodVMs. The demo will look at the integration between namespaces created in vSphere with Kubernetes and the Harbor projects. I will also show how to download the CA certificate to a client to enable remote access to Harbor. Finally, I will show how to tag and push some images up to the image registry.
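
For the tag-and-push step at the end of the demo, the equivalent from a script would look roughly like this. It is a sketch using the docker Python SDK; the registry address, project name and credentials are placeholders for whatever your embedded Harbor instance uses, and it assumes the Harbor CA certificate has already been added to the Docker client as shown in the video:

import docker

# Placeholders - substitute the FQDN/IP of the embedded Harbor registry and a project name
registry = "harbor.vsphere.local"
project = "demo-project"

client = docker.from_env()
client.login(registry=registry, username="user@vsphere.local", password="********")

# Pull a sample image, re-tag it for the Harbor project, then push it
image = client.images.pull("nginx", tag="latest")
target = f"{registry}/{project}/nginx"
image.tag(target, tag="latest")

for line in client.images.push(target, tag="latest", stream=True, decode=True):
    print(line)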