Deploying a Tanzu Kubernetes cluster using tanzu CLI in vSphere with Tanzu

Regular readers will have seen a number of articles on this site which use the tanzu command line to create and delete TKGm clusters. TKGm is the nomenclature I am using to describe multi-cloud TKG clusters (also known as standalone TKG clusters) that can be deployed onto numerous different IaaS platforms, including vSphere. In this post, I want to show you how to use the same tanzu CLI tooling to deploy a Tanzu Kubernetes cluster via the TKG Service (TKGS) on vSphere with Tanzu. I have always shown that to deploy TKG clusters on vSphere with Tanzu, you log in to…
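
To give a flavour of the flow (the endpoint address and file name below are placeholders of my own, not taken from the post), it looks something like this:

  # Log in to the Supervisor cluster control plane
  tanzu login --endpoint https://192.168.100.10 --name supervisor

  # Create a workload cluster from a cluster definition file
  tanzu cluster create --file tkc-cluster.yaml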

A closer look at the v1alpha2 TanzuKubernetesCluster format in vSphere with Tanzu

Today I wanted to take a closer look at the new Tanzu Kubernetes Cluster YAML format (v1alpha2) which extends the configurability of TKG clusters that are deployed via the TKG Service (TKGS) in vSphere with Tanzu. We will look at this from two viewpoints. The first is to show you the differences when it comes to creating a new TKG cluster, as there are a number of different manifest settings now required with the v1alpha2 format. The second viewpoint is to look at how to upgrade the Tanzu Kubernetes Release (tkr) on an existing cluster which has been upgraded from…
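
For context, a minimal v1alpha2 manifest might look something like the sketch below; the vmClass, storageClass, and tkr values are illustrative placeholders of mine, not taken from the post.

  apiVersion: run.tanzu.vmware.com/v1alpha2
  kind: TanzuKubernetesCluster
  metadata:
    name: tkc-demo
    namespace: my-supervisor-ns
  spec:
    topology:
      controlPlane:
        replicas: 1
        vmClass: best-effort-small
        storageClass: vsan-default-storage-policy
        tkr:
          reference:
            name: v1.21.2---vmware.1-tkg.1.ee25d55
      nodePools:
      - name: workers
        replicas: 2
        vmClass: best-effort-small
        storageClass: vsan-default-storage-policy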

A closer look at the vSphere with Tanzu Namespace Service

Now that vSphere 7.0U3c is available, I thought it might be a good time to revisit some of the vSphere with Tanzu features that have appeared in recent releases. The first of these is the Namespace Service, which enables dev-ops personas to create their own Supervisor Namespaces from the command line via kubectl. We have extended this feature in vSphere 7.0U3c to allow dev-ops to add their own Kubernetes labels and annotations. Let’s take a look at how this works, and how the vSphere Administrator can put guardrails around the amount of vSphere resources this persona can consume when creating…
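
By way of illustration, once the service is enabled, a dev-ops user could create a Supervisor Namespace with their own labels and annotations using a manifest along these lines (the names and values are placeholders of mine), applied with a plain kubectl apply against the Supervisor cluster context:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: dev-team-ns
    labels:
      team: dev-team-a        # example dev-ops supplied label
    annotations:
      owner: dev-team-lead    # example dev-ops supplied annotation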

Deploying a monitoring stack (Prometheus and Grafana) on TKG v1.4 with External-DNS

Many customers who have deployed Tanzu Kubernetes clusters would like to monitor activity on the cluster. In TKG v1.4, VMware provides all of the packages one would require to set up a full monitoring stack using Prometheus and Grafana. Prometheus records real-time metrics, and Grafana provides charts, graphs, and alerts when connected to a supported data source, such as Prometheus. Prometheus has a dependency on an Ingress, which we will provide through the Contour controller package (which includes an Envoy Ingress). In fact, Prometheus leverages a special kind of Ingress called an HTTPProxy, which is provided with Contour. We are also going…
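
To give a flavour of what that looks like, here is a sketch of an HTTPProxy for the Prometheus front end; the fqdn, secret, and service names are assumptions of mine rather than values from the post.

  apiVersion: projectcontour.io/v1
  kind: HTTPProxy
  metadata:
    name: prometheus-httpproxy
    namespace: tanzu-system-monitoring
  spec:
    virtualhost:
      fqdn: prometheus.example.com   # placeholder FQDN, published via External-DNS
      tls:
        secretName: prometheus-tls   # placeholder TLS secret
    routes:
    - services:
      - name: prometheus-server      # placeholder Service name
        port: 80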

Securing LDAP with TLS certificates using ClusterIssuer in TKG v1.4

Over the last month or so, I have looked at various ways of securing Tanzu Kubernetes Grid (TKG) clusters. One recent post covered the integration of LDAP through Dex and Pinniped so you can control who can access the non-admin context of your TKG cluster. I’ve also looked at how TKG clusters that do not have direct access to the internet can use an HTTP/HTTPS proxy. Similarly, I looked at some tips when deploying TKG in an air-gapped environment, pulling all the necessary images from our external image registry and pushing them to a local Harbor registry. In another…
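
For reference, the ClusterIssuer at the heart of this setup can be as simple as the self-signed cert-manager sketch below (the name is a placeholder of mine, and the post may well use a different issuer type):

  apiVersion: cert-manager.io/v1
  kind: ClusterIssuer
  metadata:
    name: selfsigned-clusterissuer
  spec:
    selfSigned: {}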

TKG v1.4 LDAP (Active Directory) integration with Pinniped and Dex

LDAP integration with Pinniped and Dex is a topic that I have written about before, particularly with TKG v1.3. However, I recently had reason to deploy TKG v1.4 and noticed some nice new enhancements around LDAP integration that I thought worth highlighting. One is the fact that you no longer need to have a web browser available in the environment where you are configuring LDAP credentials, which was a requirement in the previous version. In this post, I will deploy a TKG v1.4 management cluster on vSphere. This environment uses the NSX ALB to provide IP addresses for both…
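
As a rough sketch, the LDAP section of a management cluster configuration file looks something like this; the host, DNs, and values here are placeholders of mine, not taken from the post.

  IDENTITY_MANAGEMENT_TYPE: ldap
  LDAP_HOST: ldap.example.com:636
  LDAP_USER_SEARCH_BASE_DN: cn=Users,dc=example,dc=com
  LDAP_GROUP_SEARCH_BASE_DN: dc=example,dc=com
  LDAP_BIND_DN: cn=administrator,cn=Users,dc=example,dc=com
  LDAP_BIND_PASSWORD: "changeme"            # placeholder credential
  LDAP_ROOT_CA_DATA_B64: <base64 CA cert>   # CA cert for LDAPS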

Network Policies in Tanzu Mission Control revisited

Earlier this month, I had my first look at network policies in Tanzu Mission Control (TMC). That earlier post looked at a very simple network policy where I used a web server app and showed how we could control access to it from other pods by using labels. In this post, I wanted to do something a bit more detailed. For the purposes of this test, I will use a pod-based NFS server, and then control access to it from other pods that wish to mount the NFS file share from the server pod. I have already…
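
To make the scenario concrete, a policy like this maps to a Kubernetes NetworkPolicy along the lines of the sketch below; the namespace, labels, and port are assumptions of mine for illustration.

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-nfs-clients
    namespace: nfs
  spec:
    podSelector:
      matchLabels:
        app: nfs-server       # assumed label on the NFS server pod
    ingress:
    - from:
      - podSelector:
          matchLabels:
            role: nfs-client  # assumed label on permitted client pods
      ports:
      - protocol: TCP
        port: 2049            # standard NFSv4 port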