Tanzu Management Cluster Create 101 (1 of 6) – Launching the UI [Video]

You may have noticed a number of posts on TKG from me recently. I’ve been spending a lot of time these days with TKG (Tanzu Kubernetes Grid), predominantly deploying it onto vSphere. However, I know that this is still unexplored territory for a lot of people, so I decided to create a number of very short, bite-sized 101 videos to help you get started. This very first video in the 101 series takes a look at how to launch the UI so you can deploy your first TKG management cluster. We look at the command options, including the --bind option which…

Deploying a monitoring stack (Prometheus and Grafana) on TKG v1.4 with External-DNS

Many customers who have deployed Tanzu Kubernetes clusters would like to monitor activity on the cluster. In TKG v1.4, VMware provides all of the packages one would require to set up a full monitoring stack using Prometheus and Grafana. Prometheus records real-time metrics and Grafana provides charts, graphs, and alerts when connected to a supported data source, such as Prometheus. Prometheus has a dependency on an Ingress, which we will provide through the Contour controller package (which includes an Envoy Ingress). In fact, Prometheus leverages a special kind of Ingress called an HTTPProxy, which is provided with Contour. We are also going…
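
As a quick illustration of what "a supported data source" means in practice, the sketch below queries the Prometheus HTTP API directly; this is the same API Grafana uses once Prometheus is configured as a data source. The FQDN, the use of verify=False for a self-signed lab certificate, and the choice of the built-in up metric are all assumptions made for the example, not details taken from the post.

```python
import requests

# Hypothetical FQDN published for the Prometheus package via the Contour
# HTTPProxy and External-DNS; substitute whatever your ingress resolves to.
PROMETHEUS_URL = "https://prometheus.corp.local"

# Query the built-in "up" metric, which reports whether each scrape target is
# reachable. Grafana issues this same style of query against /api/v1/query
# when Prometheus is configured as its data source.
resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": "up"},
    verify=False,  # assumption: self-signed certificate in a lab environment
)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    instance = result["metric"].get("instance", "unknown")
    value = result["value"][1]
    print(f"{instance}: up={value}")
```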

Securing LDAP with TLS certificates using ClusterIssuer in TKG v1.4

Over the last month or so, I have looked at various ways of securing Tanzu Kubernetes Grid (TKG) clusters. One recent post covered the integration of LDAP through Dex and Pinniped so you can control who can access the non-admin context of your TKG cluster. I’ve also looked at how TKG clusters that do not have direct access to the internet can use an HTTP/HTTPS proxy. Similarly, I looked at some tips when deploying TKG in an air-gapped environment, pulling all the necessary images from our external image registry and pushing them to a local Harbor registry. In another…

TKG v1.4 LDAP (Active Directory) integration with Pinniped and Dex

LDAP integration with Pinniped and Dex is a topic that I have written about before, particularly with TKG v1.3. However, recently I had reason to deploy TKG v1.4 and noticed some nice new enhancements around LDAP integration that I thought worthwhile highlighting. One is the fact that you no longer need to have a web browser available in the environment where you are configuring LDAP credentials, which was a requirement in the previous version. In this post, I will deploy a TKG v1.4 management cluster on vSphere. This environment uses the NSX ALB to provide IP addresses for both…

Network Policies in Tanzu Mission Control revisited

Earlier this month, I had my first look at network policies in Tanzu Mission Control (TMC). This earlier post looked at a very simple network policy where I used a web server app, and showed how we could control access to it from other pods by using labels. In this post, I wanted to do something a bit more detailed. For the purposes of this test, I will use a pod-based NFS server, and then control access to it from other pods that wish to mount the NFS file share from the server pod. I have already…

Some useful tips when deploying TKG in an air-gap environment

Recently I have been looking at deploying Tanzu Kubernetes Grid (TKG) in air-gapped or internet-restricted environments. Interestingly, we offer different procedures for TKG v1.3 and TKG v1.4. In TKG v1.3, we pull the TKG images one at a time from the external VMware registry, and immediately push them up to an internal Harbor registry. In TKG v1.4, there is a different approach whereby all the images are first downloaded (in tar format) onto a workstation that has internet access. These images are then securely copied to the TKG jumpbox workstation, and from there, they are uploaded to the local…
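
The tar-based flow described above can be sketched with the Docker SDK for Python. This is only a generic illustration of the pull, save-to-tar, load, re-tag and push sequence, not the scripts or image list that TKG actually provides; the image name and Harbor registry address are placeholders, and in practice the two halves run on different machines.

```python
import docker

# Placeholder image and registry names; the real TKG image list and your
# local Harbor address will differ.
SOURCE_IMAGE = "projects.registry.vmware.com/tkg/example-image:v1.0.0"
LOCAL_REPO = "harbor.corp.local/tkg/example-image"
LOCAL_TAG = "v1.0.0"

client = docker.from_env()

# Step 1 (workstation with internet access): pull the image and save it to a
# tar file that can be copied across to the air-gapped side.
image = client.images.pull(SOURCE_IMAGE)
with open("example-image.tar", "wb") as tarball:
    for chunk in image.save():
        tarball.write(chunk)

# Step 2 (TKG jumpbox, after the tar file has been copied over): load the
# image, re-tag it for the local Harbor registry, and push it.
with open("example-image.tar", "rb") as tarball:
    loaded = client.images.load(tarball.read())[0]
loaded.tag(LOCAL_REPO, tag=LOCAL_TAG)
client.images.push(LOCAL_REPO, tag=LOCAL_TAG)
```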

A first look at Network Policies in Tanzu Mission Control

Some time back, I wrote a blog post about how to use the network policies available with the Antrea CNI (Container Network Interface). In that post we looked at how to create a simple network policy to prevent communication between pods in a Tanzu Kubernetes cluster, based on pod selectors / labels. We stood up a simple web server and a standalone pod, and showed how the pod could access the web server when no network policies were in place. We then proceeded to create a network policy that only allowed pods to communicate with each other if the pod…
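
For reference, a minimal selector-based policy of the kind described above can be created with the Kubernetes Python client, as sketched below. This uses a plain Kubernetes NetworkPolicy rather than the Antrea-specific CRDs or the Tanzu Mission Control workflow, and the namespace, policy name and labels are made up for the example.

```python
from kubernetes import client, config

# Assumes a kubeconfig for the Tanzu Kubernetes cluster is already in place.
config.load_kube_config()

# Minimal selector-based policy: only pods labelled role=web-client may reach
# pods labelled app=web-server in the "demo" namespace. All names and labels
# here are illustrative, not taken from the original posts.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-web-client-only", namespace="demo"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "web-server"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"role": "web-client"}
                        )
                    )
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="demo", body=policy)
```

Once a policy like this is in place, a standalone pod without the matching label can no longer reach the web server, which mirrors the behaviour described in the excerpt above.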