Task “Delete a virtual storage object” reports “A specified parameter was not correct”

I’ve recently been looking at the vSphere Velero Plugin, and how the latest version of the plugin enables administrators to back up and restore vSphere with Tanzu Supervisor cluster objects as well as Tanzu Kubernetes “guest” cluster objects. This plugin utilizes vSphere snapshot technology, so that a Kubernetes Persistent Volume (PV) backed by a First Class Disk (FCD) in vSphere can be snapshotted, and the snapshot is then moved by a Data Manager appliance to an S3 object store bucket. Once the data movement operation has completed, the snapshot is removed from the PV/FCD. During the testing of this new functionality,…
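For a rough sense of the workflow, a backup of a namespace containing such a PV can be driven with the standard Velero CLI. The namespace and backup names below are hypothetical examples, and the sketch assumes the vSphere plugin, an S3 BackupStorageLocation and the Data Manager appliance are already configured:

```sh
# Hypothetical names; assumes the vSphere plugin, an S3 BackupStorageLocation
# and the Data Manager appliance are already in place.
velero backup create demo-backup --include-namespaces demo

# Monitor progress - the FCD snapshot is shipped to the S3 bucket by the
# Data Manager and removed from the PV/FCD once the upload completes.
velero backup describe demo-backup --details
```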

Tanzu Kubernetes with embedded Harbor Image Registry (revisited)

Just recently I had reason to have my TKG (Tanzu Kubernetes Grid) guest cluster pull images from the embedded Harbor container image registry which is available as part of vSphere with Tanzu. I had done this in the past, but there were quite a few hoops that you needed to jump through in order to make it work. I wrote about how I did it here. So I was pleased to see that the following update was included in the vSphere with Tanzu Release Notes that coincided with vSphere 7.0U1c last December: Integration with Registry Service – Newly created Tanzu Kubernetes clusters…

Velero vSphere Operator backup/restore of TKG “guest” cluster objects in vSphere with Tanzu

Over the past week or so, I have posted a number of blogs on how to get started with the new Velero vSphere Operator. I showed how to deploy the Operator in the Supervisor Cluster of vSphere with Tanzu, and also how to install the Velero and Backupdriver components in the Supervisor. We then went on to take backups and do restores of both stateless (e.g. an Nginx Deployment) and stateful (e.g. a Cassandra StatefulSet) applications which were running as PodVMs in a Supervisor cluster. In the latter post, we saw how the new Velero Data Manager acted as the interface between Velero,…
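As a hedged illustration of those backup and restore steps (all names are hypothetical, and it assumes the Velero and Backupdriver components are already installed in the Supervisor via the Operator), the commands look something like this:

```sh
# Hypothetical names, for illustration only; assumes the Velero vSphere
# Operator has already installed Velero and the Backupdriver.
velero backup create cassandra-backup --include-namespaces cassandra
velero restore create cassandra-restore --from-backup cassandra-backup
velero restore describe cassandra-restore --details
```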

TKG & vSAN File Service for RWX (Read-Write-Many) Volumes

A common question I get in relation to VMware Tanzu Kubernetes Grid (TKG) is whether or not it supports vSAN File Service, and specifically the read-write-many (RWX) feature for container volumes. To address this question, we need to make a distinction between the ways in which TKG can be provisioned. There is the multi-cloud version of TKG, which can run on vSphere, AWS or Azure, and is deployed from a TKG management cluster. Then there is the embedded TKG edition where ‘workload clusters’ are deployed in Namespaces via vSphere with Tanzu / VCF with Tanzu. To answer the question about whether or not TKG…
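For context, a request for an RWX volume in Kubernetes is simply a PersistentVolumeClaim with the ReadWriteMany access mode. The sketch below uses a hypothetical StorageClass name, which would need to map to a provisioner and policy capable of file volumes (vSAN File Service):

```yaml
# Minimal RWX PVC sketch; "vsan-file-sc" is a hypothetical StorageClass name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: vsan-file-sc
  resources:
    requests:
      storage: 5Gi
```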

Deploying TKG v1.2.0 (TKGm) in an internet-restricted environment using Harbor

In this post, I am going to outline the steps involved to successfully deploy a Tanzu Kubernetes Grid (TKG) management cluster and workload clusters in an internet-restricted environment. [Note: since first writing this article, we appear to have standardized on TKGm – TKG multi-cloud – as the name for this product.] This is often referred to as an air-gapped environment. Note that for part of this exercise, a virtual machine will need to be connected to the internet in order to pull down the images required for TKG. Once these have been downloaded and pushed up to our local Harbor container image…
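At its core, the image relocation step is a pull, re-tag and push for each image in the TKG image list; the sketch below uses hypothetical image and registry names purely to illustrate the pattern:

```sh
# Hypothetical image and registry names; the real list of TKG images is
# generated as part of the documented air-gapped procedure.
docker pull projects.registry.vmware.com/tkg/some-image:v1.2.0
docker tag  projects.registry.vmware.com/tkg/some-image:v1.2.0 \
            harbor.mylab.local/library/tkg/some-image:v1.2.0
docker push harbor.mylab.local/library/tkg/some-image:v1.2.0
```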

Creating developer users and namespaces (scripted) in TKG “Guest” Clusters

I’ve spent a lot of time recently on creating and building out a vSphere with Tanzu environment, with the goal of deploying a Tanzu Kubernetes “guest” cluster. I frequently used the kubectl-vsphere command to log out of the Supervisor namespace context and log in to the Guest cluster context. This allowed me to start deploying stateless and stateful apps in my Tanzu Kubernetes Guest cluster. I thought no more about this step until a recent conversation with my colleague Frank Denneman. He queried whether or not Kubernetes developers would actually have vSphere privileges to do this. It was a great question which led…
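To give a flavour of what such a script wraps, the commands below create a namespace and bind a developer user to the built-in edit ClusterRole within it; the namespace and user names are hypothetical examples only:

```sh
# Hypothetical namespace and user names, for illustration only.
kubectl create namespace dev-team-1
kubectl create rolebinding dev-team-1-edit \
  --clusterrole=edit \
  --user=developer1 \
  --namespace=dev-team-1
```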

Persistent Volume Placement in HCI-Mesh deployments

One of the new features introduced in vSphere 7.0U1 is HCI-Mesh, the ability to remotely mount vSAN datastores between vSAN clusters managed by the same vCenter Server. My buddy and colleague Duncan has done a great write-up on this topic on his yellow-bricks blog. In this post, I am going to look at how to address the situation of selecting the correct vSAN datastore when provisioning Kubernetes Persistent Volumes in an environment which uses HCI-Mesh. Let’s start with why this situation needs additional consideration. Let’s assume that there is a vSphere cluster that has vSAN enabled, and thus this cluster…
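One common way to steer placement, sketched below with hypothetical names, is a StorageClass that references a storage policy compatible only with the intended vSAN datastore; the vSphere CSI driver accepts the policy via the storagepolicyname parameter:

```yaml
# Hypothetical names; "local-vsan-policy" would be crafted so that only the
# desired vSAN datastore is compatible with it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-vsan-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "local-vsan-policy"
```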