Last week I looked at how quotas are implicitly applied to Kubernetes RWX Persistent Volumes that are instantiated on vSAN File Service file shares. This got me thinking about another aspect of Kubernetes Persistent Volumes: how could some of the other parameters associated with file shares be controlled? In particular, I wanted to control which networks could access a volume, what access permissions were allowed from that network, and whether root privileges could be squashed when a root user accesses a volume. All of these options are configurable from the vSphere client and are very visible when creating file shares…
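For CSI-provisioned file volumes, these same controls are expressed in the vSphere CSI driver configuration file rather than in the vSphere client. As a minimal sketch, a NetPermissions section along the following lines governs which network can reach a share, with what permissions, and whether root squash is applied; the section label and the CIDR below are hypothetical placeholders.

```
# Sketch of a NetPermissions section in the vSphere CSI driver
# configuration file (vsphere.conf). The label "Internal" and the
# 10.27.51.0/24 network are hypothetical examples.
[NetPermissions "Internal"]
ips = "10.27.51.0/24"        # network allowed to access file shares
permissions = "READ_WRITE"   # READ_WRITE, READ_ONLY or NO_ACCESS
rootsquash = false           # set true to squash root privileges
```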
Earlier this week, I participated in a customer call around vSAN File Service and Kubernetes Persistent Volumes. I have highlighted the dynamic Read-Write-Many Persistent Volume feature of our vSphere CSI driver in conjunction with vSAN File Service before. Read-Write-Many (RWX) volumes are volumes that can be accessed and shared by multiple containers. During the discussion, one question came up in relation to quotas: can they be applied to Persistent Volumes which are backed by file shares from vSAN File Service? Answering that question is the purpose of this post. Now, for those of you who are familiar with vSAN File Service, you…
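To make the quota question concrete before digging in: with RWX volumes, the capacity requested in the PersistentVolumeClaim is the figure we will be tracing through to the underlying file share. A minimal sketch of such a claim, assuming a hypothetical StorageClass named vsan-file-sc that provisions file volumes through the vSphere CSI driver:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-file-pvc               # hypothetical name
spec:
  accessModes:
    - ReadWriteMany                # RWX => backed by a vSAN File Service file share
  resources:
    requests:
      storage: 2Gi                 # the requested capacity we trace to the share
  storageClassName: vsan-file-sc   # hypothetical file-volume StorageClass
```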
Tanzu Kubernetes Grid v1.3 introduces OIDC and LDAP identity management with Pinniped and Dex. Pinniped allows you to plug external OpenID Connect (OIDC) or LDAP identity providers (IDPs) into Tanzu Kubernetes clusters, which in turn allows you to control access to those clusters. Pinniped uses Dex as the endpoint to connect to your upstream LDAP identity provider, e.g. Microsoft Active Directory. If you are using OpenID Connect (OIDC), Dex is not required. It is also my understanding that Pinniped will eventually integrate directly with LDAP as well, removing the need for Dex. But for the moment, both components are required.…
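For context, identity management in TKG v1.3 is enabled through variables in the cluster configuration file. A sketch of the LDAP case follows; the variable names reflect my reading of the TKG v1.3 configuration reference, and the host, DNs and password are hypothetical placeholders.

```yaml
# Sketch of LDAP identity management settings in a TKG v1.3 cluster
# configuration file. All values below are hypothetical.
IDENTITY_MANAGEMENT_TYPE: ldap
LDAP_HOST: ldap.example.com:636
LDAP_BIND_DN: "cn=tkg-bind,ou=Users,dc=example,dc=com"
LDAP_BIND_PASSWORD: "changeme"
LDAP_USER_SEARCH_BASE_DN: "ou=Users,dc=example,dc=com"
LDAP_GROUP_SEARCH_BASE_DN: "ou=Groups,dc=example,dc=com"
```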
In my most recent post, we took a look at how Cluster API is utilized in TKG. Note that this post refers to the Tanzu Kubernetes Grid (TKG) multi-cloud version, sometimes referred to as TKGm. I will use this naming convention to refer to the multi-cloud TKG in this post, so that it is differentiated from other TKG products in the Tanzu portfolio. In this post, we will take a closer look at a new feature in TKG v1.3, namely its support for the NSX ALB (Advanced Load Balancer, formerly known as AVI Vantage) to…
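By way of preview, pointing TKG v1.3 at NSX ALB is also driven from the management cluster configuration file. The sketch below uses the AVI_* variables as I understand them from the TKG v1.3 documentation; the controller address, credentials, cloud, service engine group and network details are all hypothetical placeholders.

```yaml
# Sketch of NSX ALB (AVI) settings in a TKG v1.3 management cluster
# configuration file. All values below are hypothetical.
AVI_ENABLE: "true"
AVI_CONTROLLER: avi-controller.example.com
AVI_USERNAME: admin
AVI_PASSWORD: "changeme"
AVI_CLOUD_NAME: Default-Cloud
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_DATA_NETWORK: VM-Network
AVI_DATA_NETWORK_CIDR: 10.27.51.0/24
```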
In this post, I am going to take a look at Cluster API, and then at some of the changes made in TKG v1.3.1. TKG uses Cluster API extensively to create workload Kubernetes clusters, so we will be able to apply what we see in the first part of this post to TKG in the second part. There is already an extensive amount of information and documentation available on Cluster API, so I am not going to cover every aspect of it here. This link will take you to the Cluster API concepts, which discusses all the…
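To ground the discussion, the central object in Cluster API is the Cluster itself, which ties a control plane reference to an infrastructure reference. Below is a minimal sketch against the v1alpha3 API in use around TKG v1.3; the cluster name and CIDR blocks are hypothetical.

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: workload-1                        # hypothetical cluster name
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["100.96.0.0/11"]       # hypothetical pod CIDR
    services:
      cidrBlocks: ["100.64.0.0/13"]       # hypothetical service CIDR
  controlPlaneRef:                        # object managing the control plane
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: workload-1-control-plane
  infrastructureRef:                      # provider-specific infrastructure (vSphere)
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: VSphereCluster
    name: workload-1
```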
In this post, we will take a look at a brand new service that is now available in vSphere with Tanzu, called the vSphere VM Service. This new service enables developers to create virtual machines on vSphere infrastructure via Kubernetes YAML manifests, just like they would create Tanzu Kubernetes clusters via the TKG service, or PodVMs via the Pod service, both of which are already available in vSphere with Tanzu. Since we feel that many applications will be made up of both containers and VMs, this is the first step in enabling developers to create these multi-faceted applications via the…
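As a taste of what that looks like, the VM Service surfaces a VirtualMachine resource through the vmoperator.vmware.com API group. A minimal sketch follows; the VM name, namespace, image, class, storage class and network names are hypothetical and depend on what the vSphere administrator has made available to the namespace.

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: dev-vm-01                       # hypothetical VM name
  namespace: dev-namespace              # hypothetical vSphere Namespace
spec:
  imageName: centos-stream-8            # hypothetical published VM image
  className: best-effort-small          # VM class defining CPU/memory sizing
  storageClass: vsan-default-policy     # hypothetical storage class
  powerState: poweredOn
  networkInterfaces:
    - networkType: vsphere-distributed  # hypothetical network backing
      networkName: workload-network
```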
In this post, we will look at another feature of the vSphere CSI driver that enables the placement of Kubernetes objects on different vSphere environments using a combination of vSphere Tags and a feature of the CSI driver called topology, or failure domains. To achieve this, some additional entries must be added to the vSphere CSI driver configuration file. The CSI driver discovers the topology of each Kubernetes node/virtual machine and, through the kubelet, adds it as labels to the nodes. Please note that at the time of writing, the volume topology and availability zone feature was still in beta with vSphere…
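To illustrate where those node labels come into play, a StorageClass can constrain provisioning to a particular failure domain via allowedTopologies. A sketch assuming the beta failure-domain labels in use at the time; the StorageClass name and zone value are hypothetical.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zone-a-sc                         # hypothetical name
provisioner: csi.vsphere.vmware.com
volumeBindingMode: WaitForFirstConsumer   # delay binding until a pod is scheduled
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values: ["zone-a"]                # hypothetical zone tag
```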