Getting started with the TKGm (multi-cloud) Command Line (Videos)

In this post, I have two short videos demonstrating how to (1) deploy a Tanzu Kubernetes Grid multi-cloud (TKGm) management cluster using the “tkg” command line tool and then, once the TKG management cluster has been deployed, how to (2) very simply deploy a subsequent TKG workload cluster using the same “tkg” command. Note that I have updated this post to use the TKGm acronym, as this is now how we are marketing this particular product. Previously, the term standalone was used. If you wish to know more detail, check out my full post on how to…
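For reference, the basic flow shown in the videos is along these lines. The cluster name and plan are placeholders, and the exact flags assume the tkg CLI that shipped with TKG 1.1.x, so treat this as a sketch rather than a copy-and-paste recipe:

```shell
# Launch the UI-based installer to deploy the TKG management cluster
tkg init --ui

# Once the management cluster is up, create a workload cluster
# ("my-workload-cluster" and the "dev" plan are placeholders)
tkg create cluster my-workload-cluster --plan dev

# Retrieve the kubeconfig credentials for the new workload cluster
tkg get credentials my-workload-cluster
```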

Tanzu Kubernetes Grid multi-cloud (TKGm) from the tkg Command Line Interface

After spending quite a bit of time looking at vSphere with Kubernetes, and how one could deploy a Tanzu Kubernetes Grid (TKG) “guest” cluster in a namespace with a simple manifest file, I thought it was time to look at other ways in which customers could deploy TKG clusters on top of vSphere infrastructure. In other words, deploying TKG without vSphere with Kubernetes, or VMware Cloud Foundation (VCF) for that matter. This post will look at TKG multi-cloud (TKGm) version 1.1.2 and in particular the tkg command line tool to first deploy a TKG management cluster, and once that is…
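Before running tkg init against vSphere, the CLI needs to know where and how to deploy. As a rough illustration, the vSphere settings live in the tkg configuration file and look something like the sketch below. Every value here is a placeholder, the exact set of variables varies by TKG release, and the UI installer normally populates these for you:

```yaml
# Sample vSphere entries in the tkg configuration file (placeholder values)
VSPHERE_SERVER: 192.168.0.10
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_PASSWORD: "VMware123!"
VSPHERE_DATACENTER: /Datacenter
VSPHERE_DATASTORE: /Datacenter/datastore/vsanDatastore
VSPHERE_NETWORK: VM Network
VSPHERE_RESOURCE_POOL: /Datacenter/host/Cluster/Resources
VSPHERE_FOLDER: /Datacenter/vm
VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAA... user@host"
```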

Gestalt IT Podcast – Orchestration is the reason enterprises haven’t adopted containers.

I was recently asked to participate in the Gestalt IT podcast. The format was a little different to what I am used to. In the podcast, Stephen Foskett suggests a premise and the participants are asked to share their opinions on it. Essentially, pick a side. Do you agree or disagree with the premise? In this podcast, the premise was “Orchestration is the reason enterprises haven’t adopted containers”. During the conversation, I had the opportunity to talk about a number of initiatives that are ongoing at VMware related to Kubernetes. Have a listen and let me know what you think.

Integrating embedded vSphere with Kubernetes Harbor Registry with TKG (guest) clusters

A number of readers have hit me up with queries around how they can use the embedded Harbor image registry (which ships with vSphere with Kubernetes) for applications that are deployed on their Tanzu Kubernetes Grid clusters, sometimes referred to as guest clusters. Unfortunately, there is no defined workflow on how to achieve this. The reason for this is that there are a number of additional life-cycle management considerations that we need to take into account before we can fully integrate these components. This includes adding new TKG nodes to the image registry as a TKG cluster is scaled.…
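To make the end goal concrete, the sketch below shows what a workload on a TKG guest cluster would look like if it could pull from the embedded registry. This is purely hypothetical: the registry FQDN, project and secret names are made up, and this alone is not sufficient, since each TKG node would also need to trust the registry’s CA certificate, which is exactly the life-cycle gap described above.

```yaml
# Hypothetical example only; registry address, project and secret names are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: demo-app
    # image hosted in the embedded Harbor registry (placeholder address)
    image: harbor.mydomain.local/demo-project/demo-app:v1
  imagePullSecrets:
  - name: harbor-creds
```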

vSAN File Services and Cloud Native Storage integration (Video)

In this short video, I want to show some of the integration points between vSAN 7.0 File Services and Cloud Native Storage (CNS). We will use the CSI driver that ships with vSphere 7.0 to provision a new read-write-many persistent volume backed by a vSAN file share. A read-write-many persistent volume is one that can be accessed by multiple Kubernetes Pods simultaneously. I will then show how CNS provides the vSphere client with all sorts of useful information about the volume. This information is invaluable to a vSphere Admin when trying to figure out how vSphere storage is being consumed when…
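As a rough guide, the manifests involved look something like the sketch below. The storage policy name is a placeholder for a policy that maps to the vSAN datastore, and the parameter keys follow the vSphere CSI driver documentation of that era; requesting ReadWriteMany is what results in a file-share-backed volume rather than a block (VMDK) volume:

```yaml
# StorageClass using the vSphere CSI driver (policy name is a placeholder)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-file-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"
---
# PVC requesting ReadWriteMany, so it can be mounted by multiple Pods at once
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-file-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: vsan-file-sc
```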

Static Persistent Volumes and Cloud Native Storage

Recently I was asked if “statically” provisioned persistent volumes (PVs) in native, vanilla Kubernetes would be handled by Cloud Native Storage (CNS) in vSphere 7.0 and, in turn, appear in the vSphere client just like a dynamically provisioned persistent volume. The short answer is yes, this is supported and works. The details on how to do this are shown here in this post. I am going to use a file-based (NFS) volume for this “static” PV test. Note that there are two ways of provisioning a static file-based volume. The first is to use the in-tree NFS driver. These are…
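For context, a statically provisioned file-based volume using the in-tree NFS driver looks roughly like the sketch below. The server address and export path are placeholders for an existing file share; the PVC binds directly to the PV by name:

```yaml
# Statically provisioned PV using the in-tree NFS driver (placeholder server/path)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-nfs-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.100
    path: /static-share
---
# PVC that binds to the static PV above by name
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: ""      # prevent dynamic provisioning via a default StorageClass
  volumeName: static-nfs-pv
```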

Read-Only Persistent Volumes on vSAN File Services

I’m writing this post because of a misconception I had regarding how read-only volumes were configured in Kubernetes. I thought this was controlled by the accessModes parameter in the PersistentVolumeClaim manifest file. This is not the case. It is controlled from the Pod, which to me seems a bit strange. Why would this not be controlled from the PVC manifest? One of our engineers pointed me to a few Kubernetes discussions on the behaviour of accessModes and readOnly here and here. It would seem that I am not the only one confused by this behaviour. In this post, I deploy…
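To illustrate the point, the read-only behaviour is set where the Pod consumes the PVC, not in the PVC’s accessModes. A minimal sketch (all names are placeholders) looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ro-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true        # mount the volume read-only in this container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: file-share-pvc
      readOnly: true        # also settable at the volume source level
```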