Using Tanzu Mission Control for managing LDAP/AD access policies for workload clusters

I’ve recently been looking at some of the features around Tanzu Mission Control. Tanzu Mission Control (or TMC for short) is a VMware SaaS offering for managing and monitoring your Kubernetes clusters across multiple clouds. My particular interest on this occasion was in the access policy features, especially when the Tanzu Kubernetes Grid (TKG) workload clusters were deployed with LDAP/Active Directory integration via the Pinniped and Dex packages that are available with TKG. In this post, I will roll out my TKG management cluster, followed by a pair of TKG workload clusters. The TKG management cluster will be automatically integrated with…

Announcing Tanzu Community Edition from VMware

As we head into VMworld 2021 this week, there will be many announcements about new and updated VMware products and features. However, there is one that I want to bring to your attention. It is something that I have been directly involved in, in some small way, and that something is Tanzu Community Edition. Tanzu Community Edition (sometimes referred to as TCE) is a free, open source Tanzu Kubernetes Grid (TKG) distribution which has all of the same open source software found in our commercial editions of Tanzu. Personally, I find this to be a really cool announcement for a number…

vSAN File Service backed Persistent Volumes Network Access Controls [Video]

A short video to demonstrate how network access to Kubernetes Persistent Volumes that are backed by vSAN File Service file shares can be controlled. This allows an administrator to determine who has read-write access and who has read-only access to a volume, based on the network from which they are accessing the volume. This involves modifying the configuration file of the vSphere CSI driver, as shown in the following demonstration. The root squash parameter can also be controlled using this method. This links to a more detailed step-by-step write-up on how to edit the CSI driver configuration file and control…
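For a flavour of what that configuration looks like, here is an illustrative csi-vsphere.conf snippet using the CSI driver's NetPermissions sections. The networks, permissions and rootsquash values below are placeholder examples, not the exact settings used in the demonstration:

```ini
[Global]
cluster-id = "my-cluster"

# Clients on this network get read-write access, with root squash disabled
[NetPermissions "A"]
ips = "10.27.51.0/24"
permissions = "READ_WRITE"
rootsquash = false

# All other clients are limited to read-only access, with root squash enabled
[NetPermissions "B"]
ips = "*"
permissions = "READ_ONLY"
rootsquash = true
```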

Adding Network Permissions to Kubernetes PVs backed by vSAN File Share

Last week I looked at how quotas were implicit on Kubernetes RWX Persistent Volumes which were instantiated on vSAN File Service file shares. This got me thinking about another aspect of Kubernetes Persistent Volumes: how could some of the other parameters associated with file shares be controlled? In particular, I wanted to control which networks could access a volume, what access permissions were allowed from that network, and whether root privileges could be squashed when a root user accesses the volume. All of these options are configurable from the vSphere client and are very visible when creating file shares…
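As a rough sketch of how an edited CSI driver configuration is typically pushed into the cluster, the secret that the driver reads the file from is recreated. The secret name and namespace below are the common defaults for the vSphere CSI driver and may differ in your environment:

```shell
# Recreate the CSI driver configuration secret from the edited csi-vsphere.conf.
# Adjust the namespace (kube-system or vmware-system-csi) to match your CSI driver version.
kubectl delete secret vsphere-config-secret --namespace=kube-system
kubectl create secret generic vsphere-config-secret \
    --from-file=csi-vsphere.conf --namespace=kube-system
```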

vSAN File Service & Kubernetes PVs with an implicit quota

Earlier this week, I participated in a customer call around vSAN File Service and Kubernetes Persistent Volumes. I have highlighted the dynamic Read-Write-Many Persistent Volume feature of our vSphere CSI driver in conjunction with vSAN File Service before. Read-Write-Many (RWX) volumes are volumes that can be accessed/shared by multiple containers. During the discussion, one question came up in relation to quota, and whether it can be applied to Persistent Volumes which are backed by file shares from vSAN File Service; answering that question is the purpose of this post. Now, for those of you who are familiar with vSAN File Service, you…
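To make the RWX scenario concrete, here is a minimal, illustrative StorageClass and PersistentVolumeClaim. The object names and storage policy are placeholders, and the requested capacity is what the post examines in relation to the quota on the underlying file share:

```yaml
# Illustrative only - names and the storage policy are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-file-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"   # a policy compatible with vSAN File Service
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: vsan-file-sc
  accessModes:
    - ReadWriteMany            # RWX - backed by a vSAN File Service file share
  resources:
    requests:
      storage: 2Gi             # the requested size relates to the quota discussed in the post
```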

TKG v1.3 Active Directory Integration with Pinniped and Dex

Tanzu Kubernetes Grid v1.3 introduces OIDC and LDAP identity management with Pinniped and Dex. Pinniped allows you to plug external OpenID Connect (OIDC) or LDAP identity providers (IDP) into Tanzu Kubernetes clusters, which in turn allows you to control access to those clusters. Pinniped uses Dex as the endpoint to connect to your upstream LDAP identity provider, e.g. Microsoft Active Directory. If you are using OpenID Connect (OIDC), Dex is not required. It is also my understanding that Pinniped will eventually integrate directly with LDAP as well, removing the need for Dex. But for the moment, both components are required.…
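For reference, the LDAP integration is driven from the management cluster configuration file. The snippet below is an illustrative subset of the LDAP settings; hostnames, DNs, attributes and passwords are placeholders for your own Active Directory environment:

```yaml
# Illustrative subset of TKG v1.3 LDAP identity settings - all values are placeholders.
IDENTITY_MANAGEMENT_TYPE: ldap
LDAP_HOST: dc01.example.com:636
LDAP_BIND_DN: "cn=tkg-bind,ou=Service Accounts,dc=example,dc=com"
LDAP_BIND_PASSWORD: "<bind password>"
LDAP_USER_SEARCH_BASE_DN: "ou=Users,dc=example,dc=com"
LDAP_USER_SEARCH_NAME_ATTRIBUTE: userPrincipalName
LDAP_GROUP_SEARCH_BASE_DN: "ou=Groups,dc=example,dc=com"
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_ROOT_CA_DATA_B64: "<base64 encoded LDAPS CA certificate>"
```

Authentication via the identity provider is only half of the picture; users still need Kubernetes RBAC role bindings (or, as covered in the Tanzu Mission Control post above, TMC access policies) before they are authorized to do anything in the cluster.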

TKG v1.3 and the NSX Advanced Load Balancer

In my most recent post, we took a look at how Cluster API is utilized in TKG. Note that this post refers to the Tanzu Kubernetes Grid (TKG) multi-cloud version, sometimes referred to as TKGm. I will use this naming convention to refer to the multi-cloud TKG in this post, so that it is differentiated from other TKG products in the Tanzu portfolio. In this post, we will take a closer look at a new feature in TKG v1.3, namely the fact that it now supports the NSX Advanced Load Balancer (NSX ALB, formerly known as Avi Vantage) to…
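As an indication of what this looks like in practice, the NSX ALB integration is also driven from the management cluster configuration file. The snippet below is an illustrative subset of those settings; the controller address, credentials, cloud, service engine group and network values are placeholders for your own Avi/NSX ALB setup:

```yaml
# Illustrative subset of TKG v1.3 NSX ALB settings - all values are placeholders.
AVI_ENABLE: "true"
AVI_CONTROLLER: avi-controller.example.com
AVI_USERNAME: admin
AVI_PASSWORD: "<avi password>"
AVI_CA_DATA_B64: "<base64 encoded Avi controller certificate>"
AVI_CLOUD_NAME: Default-Cloud
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_DATA_NETWORK: VM-Network
AVI_DATA_NETWORK_CIDR: 10.27.51.0/24
```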