Network Policies in Tanzu Mission Control revisited

Earlier this month, I had my first look at network policies in Tanzu Mission Control (TMC). That earlier post looked at a very simple network policy: I used a web server app and showed how we could control access to it from other pods by using labels. In this post, I wanted to do something a bit more detailed. For the purposes of this test, I will use a pod-based NFS server, and then control access to it from other pods that wish to mount the NFS file share from the server pod. I have already…
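To give a feel for the kind of policy involved, here is a minimal sketch of a NetworkPolicy that only admits labelled clients to an NFS server pod. The labels (app: nfs-server, role: nfs-client) and the single NFSv4 port are illustrative assumptions, not taken from the post itself:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nfs-clients
spec:
  podSelector:
    matchLabels:
      app: nfs-server          # the policy applies to the NFS server pod
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: nfs-client # only pods carrying this label may connect
      ports:
        - protocol: TCP
          port: 2049           # NFSv4
```

Pods without the role: nfs-client label are denied, which is the label-based access control demonstrated in the earlier post, applied here to an NFS mount.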

Securing application Ingress access on TKG v1.4 with Cert Manager and Contour

In this article, I will walk through the steps involved in securing application Ingress access on TKG v1.4. To achieve this, I will use two packages that are available with TKG v1.4: Cert Manager and Contour. We will deploy a sample application, kuard (Kubernetes Up and Running demo), and show how we can use these packages to automatically generate certificates to establish trust between our client (browser) and the application (kuard), which will be accessed via an Ingress. For the purposes of this article, I will create my own local Certificate Authority. If you have access to a valid…
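As a rough sketch of how the pieces fit together: a cert-manager ClusterIssuer backed by the local CA's key pair, and an Ingress (served by Contour) annotated so that cert-manager mints the certificate automatically. All names here (local-ca, local-ca-pair, kuard.example.com) are assumptions for illustration:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: local-ca
spec:
  ca:
    secretName: local-ca-pair   # Secret holding the local CA cert and key
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuard
  annotations:
    cert-manager.io/cluster-issuer: local-ca  # ask cert-manager to issue the TLS cert
spec:
  tls:
    - hosts:
        - kuard.example.com
      secretName: kuard-tls     # cert-manager stores the generated certificate here
  rules:
    - host: kuard.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kuard
                port:
                  number: 80
```

With the CA certificate trusted by the browser, the TLS handshake to kuard succeeds without warnings.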

Using Tanzu Mission Control Data Protection with on-premises S3 (MinIO)

Today, we will look at another feature of Tanzu Mission Control: Data Protection. In an earlier post, we saw how Tanzu Mission Control, or TMC for short, can be used to manage and create clusters on vSphere that have Identity Management integrated with LDAP/Active Directory. We also saw in that same post how TMC-managed Tanzu Kubernetes clusters on vSphere utilized the NSX ALB for Load Balancing services. Now we will deploy an S3 Object Store from MinIO to an on-premises Tanzu Kubernetes cluster. This will then become the “backup target” for TMC Data Protection. TMC Data Protection uses the…
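TMC drives the configuration for you, but since TMC Data Protection is built on Velero, the backup target it wires up on the cluster is, roughly speaking, a Velero BackupStorageLocation pointing at the MinIO S3 endpoint. The bucket name and URL below are assumptions for illustration:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws                  # MinIO speaks the S3 (AWS) API
  objectStorage:
    bucket: tmc-backups
  config:
    region: minio
    s3ForcePathStyle: "true"     # path-style addressing, as MinIO expects
    s3Url: http://minio.example.com:9000
```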

vSphere CSI v2.2 – Online Volume Expansion

The vSphere CSI driver version 2.2 has just been released. One of the features I was looking forward to in this release is the inclusion of Online Volume Expansion. While volume expansion was available in earlier releases, it was always an offline operation. In other words, you had to detach the volume from the pod, grow it, and then attach it back once the expand operation completed. In this version, there is no need to remove the Pod. In this short post, I’ll show a quick demonstration of how it is done. Requirements: this feature requires vSphere 7.0 Update 2 (U2).…
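For context, the mechanics are simple: the StorageClass must permit expansion, and then growing the volume is just a change to the PVC's storage request while the pod stays attached. A minimal sketch, with illustrative names and sizes:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true       # required before any expansion is possible
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere-sc
  resources:
    requests:
      storage: 2Gi               # bump this (e.g. from 1Gi); the pod keeps running
```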

VCP to vSphere CSI Migration in Kubernetes

When VMware first introduced support for Kubernetes, our first storage driver was the VCP, the in-tree vSphere Cloud Provider. Some might remember that this driver was referred to as Project Hatchway back in the day. This in-tree driver allows Kubernetes to consume vSphere storage for persistent volumes. One of the drawbacks to the in-tree driver approach was that every storage vendor had to include their own driver in each Kubernetes distribution, which ballooned the core Kubernetes code and made maintenance difficult. Another drawback of this approach was that vendors typically had to wait for a new version of Kubernetes to release…
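The in-tree versus out-of-tree distinction is visible in the StorageClass provisioner name. For illustration, here is the same class of storage expressed first with the legacy in-tree VCP provisioner and then with the out-of-tree CSI driver (the policy name "gold" is an assumption):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vcp-sc
provisioner: kubernetes.io/vsphere-volume   # legacy in-tree VCP
parameters:
  storagePolicyName: gold
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc
provisioner: csi.vsphere.vmware.com         # out-of-tree vSphere CSI driver
parameters:
  storagepolicyname: gold
```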

vSphere with Tanzu stateful application backup/restore using Velero vSphere Operator

Recently I wrote about our new Velero vSphere Operator. This new functionality, launched with VMware Cloud Foundation (VCF) 4.2, enables administrators to back up and restore objects in their vSphere with Tanzu namespaces. In my previous post, I showed how we could use the Velero vSphere Operator to back up and restore a stateless application (the example used was an Nginx deployment) to and from an S3 Object Store bucket. The S3 object store and bucket were provided by the MinIO Operator that is also available in VCF 4.2 as part of the vSAN Data Persistence platform (DPp) offering. In this post,…
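For the stateful case, the request handed to Velero is essentially a Backup covering the application's namespace with volume snapshots enabled. A minimal sketch, with the namespace name as an assumption:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: stateful-app-backup
  namespace: velero
spec:
  includedNamespaces:
    - demo-app                 # namespace holding the stateful application
  snapshotVolumes: true        # capture the persistent volumes, not just the objects
```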

VMware Fusion 12 – vctl / KinD / MetalLB / Nginx deployment

A number of months back, I wrote an article which looked at how we now provide a Kubernetes in Docker (KinD) service in VMware Fusion 12. In a nutshell, this allows us to very quickly stand up a Kubernetes environment using the Nautilus Container Engine with a very lightweight virtual machine (CRX) based on VMware Photon OS. In this post, I wanted to extend the experience and demonstrate how we can stand up a simple Nginx deployment. First, we will do a simple deployment. Then we will extend it to use a Load Balancer service (leveraging MetalLB). This post will…
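As a sketch of the Load Balancer step: MetalLB (in the ConfigMap-based releases of that period) is given a layer-2 address pool, and the Nginx Service simply requests type LoadBalancer. The address range below is an assumption for the KinD node network:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.200-172.18.255.250
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer           # MetalLB assigns an external IP from the pool
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```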