TKG v1.3 Active Directory Integration with Pinniped and Dex

Tanzu Kubernetes Grid v1.3 introduces OIDC and LDAP identity management with Pinniped and Dex. Pinniped allows you to plug external OpenID Connect (OIDC) or LDAP identity providers (IdPs) into Tanzu Kubernetes clusters, which in turn allows you to control access to those clusters. Pinniped uses Dex as the endpoint to connect to your upstream LDAP identity provider, e.g. Microsoft Active Directory. If you are using OIDC, Dex is not required. It is also my understanding that Pinniped will eventually integrate directly with LDAP as well, removing the need for Dex, but for the moment both components are required.…
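
To give a flavour of the LDAP integration, here is a minimal sketch of the LDAP-related entries that go into a TKG v1.3 management cluster configuration file. The variable names follow the TKG cluster configuration format, but the host, bind DN and search bases are placeholders for a hypothetical Active Directory domain; check the TKG documentation for the full set of options.

```yaml
# Sketch: LDAP identity settings in a TKG v1.3 management cluster config.
# Host, DNs and credentials below are placeholders for a hypothetical
# Active Directory domain.
IDENTITY_MANAGEMENT_TYPE: ldap
LDAP_HOST: ldap.corp.example.com:636
LDAP_BIND_DN: "cn=tkg-bind,ou=ServiceAccounts,dc=corp,dc=example,dc=com"
LDAP_BIND_PASSWORD: "<bind-password>"
LDAP_USER_SEARCH_BASE_DN: "ou=Users,dc=corp,dc=example,dc=com"
LDAP_USER_SEARCH_USERNAME: sAMAccountName
LDAP_GROUP_SEARCH_BASE_DN: "ou=Groups,dc=corp,dc=example,dc=com"
LDAP_ROOT_CA_DATA_B64: "<base64-encoded CA certificate>"
```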

Tanzu Kubernetes considerations with the new VM Class in vSphere with Tanzu

I recently posted about a new feature in vSphere with Tanzu called the VM Service, which became available with vSphere 7.0U2a. In a nutshell, this new service allows developers to provision not just Tanzu Kubernetes Clusters and PodVMs in their respective namespaces, but native Virtual Machines as well. The VM Service introduces a new feature called VirtualMachineClassBindings to a developer, and also introduces some new behaviour around an existing feature, VirtualMachineClass. VirtualMachineClasses describe the available resource sizing for virtual machines: how much compute and memory to allocate to a VM, and also if the…
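
As a rough illustration, a VirtualMachineClass is just a Kubernetes object describing a VM sizing, and a VirtualMachineClassBinding then controls which classes developers in a given namespace may consume. The sketch below uses the vmoperator v1alpha1 API with a hypothetical "small" class; verify the field names against your vSphere with Tanzu release.

```yaml
# Sketch: a VirtualMachineClass (vmoperator v1alpha1 API) describing a
# hypothetical "small" VM sizing of 2 vCPUs and 4GiB of memory.
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineClass
metadata:
  name: best-effort-small
spec:
  hardware:
    cpus: 2
    memory: 4Gi
```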

TKG v1.3 and the NSX Advanced Load Balancer

In my most recent post, we took a look at how Cluster API is utilized in TKG. Note that this post refers to the Tanzu Kubernetes Grid (TKG) multi-cloud version, sometimes referred to as TKGm. I will use this naming convention for the multi-cloud TKG throughout this post, to differentiate it from other TKG products in the Tanzu portfolio. In this post, we will take a closer look at a new feature in TKG v1.3, namely its support for the NSX Advanced Load Balancer (NSX ALB, formerly known as Avi Vantage) to…
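
For context, enabling the NSX ALB in TKG v1.3 is largely a matter of supplying Avi Controller details in the management cluster configuration. A minimal sketch, where the controller address, credentials, network names and CIDR are all placeholders:

```yaml
# Sketch: NSX ALB (Avi) settings in a TKG v1.3 management cluster config.
# Controller address, credentials and network details are placeholders.
AVI_ENABLE: "true"
AVI_CONTROLLER: avi-controller.corp.example.com
AVI_USERNAME: admin
AVI_PASSWORD: "<password>"
AVI_CLOUD_NAME: Default-Cloud
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_DATA_NETWORK: VM-Network
AVI_DATA_NETWORK_CIDR: 10.27.51.0/24
AVI_CA_DATA_B64: "<base64-encoded controller certificate>"
```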

A closer look at Cluster API and TKG v1.3.1

In this post, I am going to take a look at Cluster API, and then at some of the changes made in TKG v1.3.1. TKG uses Cluster API extensively to create workload Kubernetes clusters, so we will be able to apply what we see in the first part of this post to TKG in the second part. There is already an extensive amount of information and documentation available on Cluster API, so I am not going to cover every aspect of it here. This link will take you to the Cluster API concepts page, which discusses all the…
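
By way of illustration, the central object in Cluster API is the Cluster resource, which ties a cluster network definition to a control plane object and an infrastructure-specific object (VSphereCluster on vSphere). Below is a minimal sketch against the v1alpha3 API in use in this TKG timeframe; the names and CIDR blocks are hypothetical.

```yaml
# Sketch: a Cluster API "Cluster" object (v1alpha3, vSphere provider).
# Names and CIDR blocks are hypothetical.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: workload-1
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["100.96.0.0/11"]
    services:
      cidrBlocks: ["100.64.0.0/13"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: workload-1-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: VSphereCluster
    name: workload-1
```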

TKG & vSAN File Service for RWX (Read-Write-Many) Volumes

A common question I get in relation to VMware Tanzu Kubernetes Grid (TKG) is whether or not it supports vSAN File Service, and specifically the read-write-many (RWX) feature for container volumes. To address this question, we need to distinguish between the ways TKG can be provisioned. There is the multi-cloud version of TKG, which can run on vSphere, AWS or Azure, and is deployed from a TKG management cluster. Then there is the embedded TKG edition, where ‘workload clusters’ are deployed in Namespaces via vSphere with Tanzu / VCF with Tanzu. To answer the question about whether or not TKG…
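
For reference, a pod requests an RWX volume simply by asking for the ReadWriteMany access mode in a PersistentVolumeClaim; whether the request can be satisfied depends on the CSI driver and the backend, which is where vSAN File Service comes in. A minimal sketch, where the storage class name is hypothetical:

```yaml
# Sketch: a PVC requesting a read-write-many (RWX) volume. The storage
# class name "vsan-file-sc" is hypothetical; it would map to a vSAN File
# Service backed file share via the vSphere CSI driver.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-rwx-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: vsan-file-sc
```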

Deploying TKG v1.2.0 (TKGm) in an internet-restricted environment using Harbor

In this post, I am going to outline the steps involved to successfully deploy a Tanzu Kubernetes Grid (TKG) management cluster and workload clusters in an internet-restricted environment. [Note: since first writing this article, we appear to have standardized on TKGm – TKG multi-cloud – as the name for this product.] This is often referred to as an air-gapped environment. Note that for part of this exercise, a virtual machine will need to be connected to the internet in order to pull down the images required for TKG. Once these have been downloaded and pushed up to our local Harbor container image…
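
To sketch the general idea: once the required images have been pushed to the local Harbor registry, TKG is pointed at that registry via the TKG_CUSTOM_IMAGE_REPOSITORY variables before the management cluster is created. The registry FQDN and project name below are placeholders:

```bash
# Sketch: pointing TKG v1.2 at a local Harbor registry for an
# air-gapped install (registry FQDN and project are placeholders).
export TKG_CUSTOM_IMAGE_REPOSITORY="harbor.corp.example.com/library"
# Trust Harbor's custom/self-signed CA (alternatively, skip TLS verification)
export TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE="$(base64 -w0 harbor-ca.crt)"
# Then create the management cluster as usual, e.g.
tkg init --infrastructure vsphere --plan dev
```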

Deploying Harbor v2.1.0 – Step By Step

Over the Thanksgiving break, I took the opportunity to look at the steps required to deploy Tanzu Kubernetes Grid (TKGm) in an air-gapped or internet-restricted environment. The first step to achieving this was to deploy the Harbor Container Image Registry locally in my own environment. While I’ve written about Harbor quite a bit in its early days, I haven’t looked at it in earnest recently, so it was good to revisit it and see what has changed. In this post, I’ll walk through the steps involved, and point you to a few scripts that I developed to speed up the process. At…
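
For orientation, the overall flow with the offline installer bundle looks roughly like this; the download URL follows the v2.1.0 GitHub release, and the harbor.yml edits (hostname, TLS certificates, admin password) are the pieces to tailor to your own environment:

```bash
# Sketch: deploying Harbor v2.1.0 from the offline installer bundle
wget https://github.com/goharbor/harbor/releases/download/v2.1.0/harbor-offline-installer-v2.1.0.tgz
tar -xzvf harbor-offline-installer-v2.1.0.tgz
cd harbor
# Copy the template, then set hostname, TLS certificate paths and the
# admin password for your environment
cp harbor.yml.tmpl harbor.yml
vi harbor.yml
# Run the installer (requires docker and docker-compose)
sudo ./install.sh
```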