Earlier this month, I had my first look at network policies in Tanzu Mission Control (TMC). That earlier post looked at a very simple network policy where I used a web server app and showed how we could control access to it from other pods by using labels. In this post, I wanted to do something a bit more detailed. For the purposes of this test, I will use a pod-based NFS server, and then control access to it from other pods that wish to mount the NFS file share from the server pod. I have already…
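For a flavour of the kind of rule involved, here is a minimal sketch of a Kubernetes NetworkPolicy that only admits NFS traffic from pods carrying a particular label. The labels `app: nfs-server` and `role: nfs-client` are placeholders of my own, not taken from the post:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nfs-clients
spec:
  podSelector:
    matchLabels:
      app: nfs-server          # placeholder label on the NFS server pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: nfs-client     # only pods carrying this label may connect
    ports:
    - protocol: TCP
      port: 2049               # NFS
```

With something like this in place, pods without the `role: nfs-client` label can no longer reach port 2049 on the server pod.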
Recently I have been looking at deploying Tanzu Kubernetes Grid (TKG) in air-gapped or internet-restricted environments. Interestingly, we offer different procedures for TKG v1.3 and TKG v1.4. In TKG v1.3, we pull the TKG images one at a time from the external VMware registry and immediately push them to an internal Harbor registry. In TKG v1.4, the approach is different: all of the images are first downloaded (in tar format) onto a workstation that has internet access. These images are then securely copied to the TKG jumpbox workstation, and from there they are uploaded to the local…
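The v1.3 flow is essentially a pull/tag/push loop per image. Purely as an illustration (the image name and Harbor hostname below are placeholders of mine), each iteration looks something like this:

```sh
# Illustrative only: relocate one image from the public VMware registry
# to an internal Harbor registry (image name and hostname are placeholders)
docker pull projects.registry.vmware.com/tkg/some-image:v1.3.0
docker tag  projects.registry.vmware.com/tkg/some-image:v1.3.0 \
            harbor.corp.local/tkg/some-image:v1.3.0
docker push harbor.corp.local/tkg/some-image:v1.3.0
```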
Some time back, I wrote a blog post about how to use the network policies available with the Antrea CNI (Container Network Interface). In that post, we looked at how to create a simple network policy to prevent communication between pods in a Tanzu Kubernetes cluster, based on pod selectors/labels. We stood up a simple web server and a standalone pod, and showed how the pod could access the web server when no network policies were in place. We then proceeded to create a network policy that only allowed pods to communicate with each other if the pod…
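As a reminder of the shape of that earlier policy, something along these lines (the labels and port are illustrative placeholders, not lifted from the post) restricts ingress to the web server pods to clients carrying a matching label:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-access
spec:
  podSelector:
    matchLabels:
      app: web-server         # applies to the web server pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web-client     # only pods with this label may connect
    ports:
    - protocol: TCP
      port: 80
```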
In this article, I will walk through the steps involved in securing application Ingress access on TKG v1.4. To achieve this, I will use two packages that are available with TKG v1.4: Cert Manager and Contour. We will deploy a sample application, kuard (the Kubernetes Up and Running demo), and show how we can use these packages to automatically generate certificates to establish trust between our client (browser) and the application (kuard), which will be accessed via an Ingress. For the purposes of this article, I will create my own local Certificate Authority. If you have access to a valid…
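To give an idea of how the pieces fit together, here is a hedged sketch of a kuard Ingress that asks cert-manager to issue its TLS certificate; the issuer name, hostname, and Service details are assumptions of mine, not taken from the article:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuard
  annotations:
    cert-manager.io/cluster-issuer: my-ca-issuer   # hypothetical issuer backed by the local CA
spec:
  ingressClassName: contour          # Contour handles the Ingress traffic
  tls:
  - hosts:
    - kuard.corp.local               # placeholder hostname
    secretName: kuard-tls            # cert-manager stores the issued certificate here
  rules:
  - host: kuard.corp.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kuard              # assumes a Service named kuard forwarding to the kuard pods
            port:
              number: 80
```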
In this post, I am going to show how I set up my Tanzu Kubernetes Grid management cluster using a proxy configuration. I suspect this is something many readers may want to try at some point, for various reasons. I will add a caveat to say that I have done the bare minimum to get this configuration to work, so you will probably want to spend far more time than I did on tweaking and tuning the proxy configuration. At the end of the day, the purpose of this exercise is to show how a TKG bootstrap virtual machine…
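For reference, the proxy settings live in the TKG cluster configuration file. A minimal sketch, assuming a proxy at proxy.corp.local:3128 (all values below are placeholders of my own), might look like:

```yaml
# Proxy settings in the TKG cluster configuration file (values are placeholders)
TKG_HTTP_PROXY_ENABLED: "true"
TKG_HTTP_PROXY: http://proxy.corp.local:3128
TKG_HTTPS_PROXY: http://proxy.corp.local:3128
# Internal ranges and cluster-local names that should bypass the proxy
TKG_NO_PROXY: 10.0.0.0/8,192.168.0.0/16,.svc,.svc.cluster.local,localhost,127.0.0.1
```

Getting TKG_NO_PROXY right is where most of the tweaking and tuning tends to happen.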
Today, we will look at another feature of Tanzu Mission Control: Data Protection. In an earlier post, we saw how Tanzu Mission Control, or TMC for short, can be used to manage and create clusters on vSphere that have Identity Management integrated with LDAP/Active Directory. In that same post, we also saw how TMC-managed Tanzu Kubernetes clusters on vSphere utilize the NSX ALB for Load Balancing services. Now we will deploy an S3 Object Store from MinIO to an on-premises Tanzu Kubernetes cluster. This will then become the “backup target” for TMC Data Protection. TMC Data Protection uses the…
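As a rough idea of what a pod-based MinIO object store looks like, here is a minimal single-replica Deployment sketch; the credentials are placeholders of my own, and a real deployment would add persistent storage and a Service (e.g. of type LoadBalancer, fronted by the NSX ALB) to expose the endpoint:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio
        args: ["server", "/data"]      # serve the S3 API from /data
        env:
        - name: MINIO_ROOT_USER
          value: minio                 # placeholder credentials only
        - name: MINIO_ROOT_PASSWORD
          value: minio123
        ports:
        - containerPort: 9000          # S3 API endpoint
```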
I’ve recently been looking at some of the features of Tanzu Mission Control. Tanzu Mission Control (or TMC for short) is a VMware SaaS offering for managing and monitoring your Kubernetes clusters across multiple clouds. My particular interest on this occasion was in the access policy features, especially when the Tanzu Kubernetes Grid (TKG) workload clusters are deployed with LDAP/Active Directory integration via the Pinniped and Dex packages that are available with TKG. In this post, I will roll out my TKG management cluster, followed by a pair of TKG workload clusters. The TKG management cluster will be automatically integrated with…
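Under the covers, access policies of this kind come down to RBAC bindings against the LDAP identities that Pinniped presents to the cluster. As a purely illustrative sketch (the group name is hypothetical), granting an Active Directory group cluster-admin might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tkg-admins
subjects:
- kind: Group
  name: tkg-admins             # hypothetical AD/LDAP group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin          # built-in admin role on the cluster
  apiGroup: rbac.authorization.k8s.io
```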