vSAN File Service backed Persistent Volumes Network Access Controls [Video]

A short video to demonstrate how network access to Kubernetes Persistent Volumes backed by vSAN File Service file shares can be controlled. This allows an administrator to determine who has read-write access and who has read-only access to a volume, based on the network from which the volume is being accessed. This involves modifying the configuration file of the vSphere CSI driver, as shown in the following demonstration. The root squash parameter can also be controlled using this method. This links to a more detailed step-by-step write-up on how to configure the CSI driver configuration file and control…
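To give a flavour of what this looks like, here is a minimal sketch of the NetPermissions sections that can be added to the vSphere CSI driver configuration file. The section names, IP ranges and values are illustrative only; the write-up and the CSI driver documentation cover the exact syntax supported by your driver version.

```
[NetPermissions "A"]
# Clients on this network get full read-write access, and root is not squashed.
ips = "10.27.51.0/24"
permissions = "READ_WRITE"
rootsquash = false

[NetPermissions "B"]
# All other clients are restricted to read-only access, with root squashed.
ips = "*"
permissions = "READ_ONLY"
rootsquash = true
```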

vSAN File Service backed RWX Persistent Volume Quota [Video]

A short video to demonstrate how vSAN File Service file shares, which are used to back dynamically created Kubernetes read-write-many Persistent Volumes (PVs), have an implicit hard quota associated with them. Read-Write-Many (RWX) PVs are volumes which can be shared between multiple Kubernetes Pods. For more details about this feature, please check out this earlier blog post.
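As a simple illustration, the claim below is the sort of RWX Persistent Volume Claim that gets satisfied by a vSAN File Service file share; the requested storage size is what ends up as the hard quota on that share. The storage class name is a placeholder for whatever vSphere CSI storage class is in use.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: file-share-pvc
spec:
  # ReadWriteMany tells the vSphere CSI driver to provision a file share
  # rather than a block volume.
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # This requested capacity becomes the hard quota on the vSAN file share.
      storage: 2Gi
  # Placeholder name - any storage class handled by csi.vsphere.vmware.com
  # against a vSAN File Service enabled datastore would do.
  storageClassName: vsan-file-sc
```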

Adding Network Permissions to Kubernetes PVs backed by vSAN File Share

Last week I looked at how quotas are implicit on Kubernetes RWX Persistent Volumes that are instantiated on vSAN File Service file shares. This got me thinking about another aspect of Kubernetes Persistent Volumes: how could some of the other parameters associated with file shares be controlled? In particular, I wanted to control which networks could access a volume, what access permissions were allowed from that network, and whether root privileges could be squashed when a root user accesses a volume. All of these options are configurable from the vSphere client and are very visible when creating file shares…
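On vanilla Kubernetes deployments of the vSphere CSI driver, the configuration file is typically delivered to the cluster as a Secret, so changes such as the NetPermissions entries shown earlier are applied by recreating that Secret. The sketch below assumes the commonly used vsphere-config-secret name in the kube-system namespace; the actual name, namespace and remaining configuration entries depend on how the driver was deployed, so treat it as an outline rather than a drop-in file.

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Name and namespace vary with the CSI driver version and deployment method.
  name: vsphere-config-secret
  namespace: kube-system
type: Opaque
stringData:
  csi-vsphere.conf: |
    [Global]
    cluster-id = "my-k8s-cluster"

    [VirtualCenter "vcenter.example.com"]
    user = "administrator@vsphere.local"
    password = "REPLACE_ME"
    datacenters = "DC-01"
    # Additional VirtualCenter settings (certificate/insecure-flag, port)
    # omitted for brevity.

    [NetPermissions "A"]
    ips = "10.27.51.0/24"
    permissions = "READ_WRITE"
    rootsquash = false
```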

vSAN File Service & Kubernetes PVs with an implicit quota

Earlier this week, I participated in a customer call around vSAN File Service and Kubernetes Persistent Volumes. I have highlighted the dynamic Read-Write-Many Persistent Volume feature of our vSphere CSI driver in conjunction with vSAN File Service before. Read-Write-Many (RWX) volumes are volumes that can be accessed/shared by multiple containers. During the discussion, one question came up in relation to quota, and whether it can be applied to Persistent Volumes that are backed by file shares from vSAN File Service; answering that question is the purpose of this post. Now, for those of you who are familiar with vSAN File Service, you…
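For context, a storage class along the following lines is what drives dynamic file share provisioning from vSAN File Service. The policy name is a placeholder, and the nfs4 fstype parameter is my assumption based on the commonly documented setting for file volumes, so verify it against the documentation for your CSI driver version.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-file-sc
provisioner: csi.vsphere.vmware.com
parameters:
  # Placeholder - any policy satisfied by the vSAN File Service
  # enabled vSAN datastore.
  storagepolicyname: "vSAN Default Storage Policy"
  # nfs4 is the fstype typically used when this class serves
  # ReadWriteMany file volumes.
  csi.storage.k8s.io/fstype: nfs4
```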

A closer look at Cluster API and TKG v1.3.1

In this post, I am going to take a look at Cluster API, and then at some of the changes made in TKG v1.3.1. TKG uses Cluster API extensively to create workload Kubernetes clusters, so we will be able to apply what we see in the first part of this post to TKG in the second part. There is already an extensive amount of information and documentation available on Cluster API, so I am not going to cover every aspect of it here. This link will take you to the Cluster API concepts, which discusses all the…
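To make the Cluster API relationship a little more concrete, here is a rough sketch of the kind of Cluster object TKG builds for a vSphere-backed workload cluster. The API versions shown are the v1alpha3 ones in use around the TKG v1.3.x timeframe, and the names and CIDR ranges are illustrative only.

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: workload-cluster-1
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["100.96.0.0/11"]
    services:
      cidrBlocks: ["100.64.0.0/13"]
  # Control plane nodes are managed through a KubeadmControlPlane object.
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: workload-cluster-1-control-plane
  # vSphere-specific infrastructure details live in a VSphereCluster object
  # provided by the Cluster API Provider for vSphere (CAPV).
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: VSphereCluster
    name: workload-cluster-1
```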

A first look at vSphere VM Service

In this post, we will take a look at a brand new service that is now available in vSphere with Tanzu, called the vSphere VM Service. This new service enables developers to create virtual machines on vSphere infrastructure via Kubernetes YAML manifests, just like they would create Tanzu Kubernetes clusters via the TKG service, or PodVMs via the Pod service, both of which are already available in vSphere with Tanzu. Since we feel that many applications will be made up of both containers and VMs, this is the first step in enabling developers to create these multi-faceted applications via the…
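As a taster of what the developer experience looks like, a VM Service virtual machine is requested with a manifest roughly like the one below. The VM class, image, storage class and network names are all placeholders for objects the vSphere administrator has made available to the namespace, so adjust them to match your environment.

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: dev-namespace
spec:
  # VM class published to the namespace by the vSphere administrator.
  className: best-effort-small
  # Content library image made available to the namespace (placeholder name).
  imageName: centos-stream-8
  # Storage class / policy assigned to the namespace.
  storageClass: vsan-default-storage-policy
  powerState: poweredOn
  networkInterfaces:
    - networkType: vsphere-distributed
      networkName: workload-network
```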

CSI Topology – Configuration How-To

In this post, we will look at another feature of the vSphere CSI driver that enables the placement of Kubernetes objects in different vSphere environments using a combination of vSphere Tags and a feature of the CSI driver called topology, or failure domains. To achieve this, some additional entries must be added to the vSphere CSI driver configuration file. The CSI driver discovers the topology of each Kubernetes node/virtual machine and, through the kubelet, adds it as labels on the nodes. Please note that at the time of writing, the volume topology and availability zone feature was still in beta with vSphere…
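Once the driver configuration file has its region and zone entries pointing at the chosen vSphere tag categories and the nodes carry the resulting labels, placement can then be expressed in a storage class. The sketch below uses the beta failure-domain label keys from that timeframe and illustrative region/zone names; the full walkthrough in the post covers the exact configuration file entries.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zone-a-vsan
provisioner: csi.vsphere.vmware.com
# Delay binding until a Pod is scheduled, so the volume is created in the
# same failure domain as the consuming node.
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/region
        values: ["region-1"]
      - key: failure-domain.beta.kubernetes.io/zone
        values: ["zone-a"]
```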