Cloud Native Storage (CNS) in vSphere with Kubernetes/Tanzu (Video)

A short video explaining the role of the vSphere CSI (Container Storage Interface) driver and CNS (Cloud Native Storage) in both the vSphere with Kubernetes/Tanzu Supervisor Cluster and in the Tanzu Kubernetes Grid (TKG) guest cluster. This video discusses the role of the CSI driver in the Supervisor Cluster, and the pvCSI (para-virtual CSI) driver in the TKG guest cluster. We also look at how the pvCSI driver communicates with the CNS control plane in vCenter Server, via the CSI driver in the Supervisor Cluster, to request Persistent Volume operations on behalf of the guest cluster.
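
To make the flow concrete, here is a minimal sketch of a PersistentVolumeClaim created in a TKG guest cluster. The StorageClass name below is an assumption for illustration; use whichever storage class your Supervisor namespace exposes. When this claim is created, the pvCSI driver relays the request to the CSI driver in the Supervisor Cluster, which in turn asks CNS in vCenter Server to provision the backing volume.

```yaml
# Illustrative sketch: a PVC created in a TKG guest cluster. The pvCSI
# driver forwards this request to the Supervisor Cluster CSI driver,
# which asks CNS in vCenter Server to create the backing virtual disk.
# The StorageClass name is an assumption; list the classes available in
# your guest cluster with: kubectl get storageclass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: guest-cluster-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsan-default-storage-policy   # placeholder name
  resources:
    requests:
      storage: 2Gi
```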

A closer look at vSphere with Kubernetes Permissions

In many of my recent posts about vSphere with Kubernetes, I have used a single user (administrator@vsphere.local) to do all of my work. This allows me to carry out a range of activities without worrying about permissions, since this vSphere Single Sign-On (SSO) administrator has “edit” permissions on all of the vK8s namespaces. In this post, I want to look at how to assign different vSphere SSO users and permissions to different namespaces, and also at how these permissions are implemented in the vK8s platform (through the Kubernetes ClusterRole and RoleBinding constructs). Let’s start with a view of what a namespace looks…
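
To give a flavour of what this looks like under the covers, here is an illustrative (not definitive) RoleBinding of the sort vSphere with Kubernetes generates in a namespace when an SSO user is granted “edit” permissions. The object name and the “sso:” user name format shown here are assumptions based on my own lab; verify against your environment with kubectl get rolebindings -n <namespace> -o yaml.

```yaml
# Illustrative only: roughly what vSphere with Kubernetes creates in a
# namespace when an SSO user is granted "edit". The binding references
# the built-in Kubernetes "edit" ClusterRole; name formats below are
# assumptions, so check your own namespace to confirm.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: "wcp:demo-namespace:user:joe"   # hypothetical object name
  namespace: demo-namespace             # hypothetical vK8s namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                            # built-in "edit" ClusterRole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: "sso:joe@vsphere.local"         # assumed SSO user name format
```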

vSphere with Kubernetes on VCF 4.0.1 Consolidated Architecture

VMware recently announced the availability of VMware Cloud Foundation (VCF) 4.0.1. I was particularly interested in this release as it introduced some enhancements around vSphere with Kubernetes deployments on the VCF Management Domain. We refer to the deployment of workloads onto the management domain as a VCF consolidated architecture. Whilst we were able to deploy vSphere with Kubernetes on the management domain in VCF version 4.0, it was not seamlessly integrated. In particular, it was not possible to select the management domain for the necessary vSphere with Kubernetes validation tests. In VCF 4.0.1, it is now possible to…

Integrating embedded vSphere with Kubernetes Harbor Registry with TKG (guest) clusters

A number of readers have hit me up with queries around how they can use the embedded Harbor image registry that comes with vSphere with Kubernetes for applications deployed on their Tanzu Kubernetes Grid clusters, sometimes referred to as guest clusters. Unfortunately, there is no defined workflow for achieving this. The reason is that there are a number of additional life-cycle management considerations that we need to take into account before we can fully integrate these components, such as adding new TKG nodes to the image registry as a TKG cluster is scaled.…
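
As a hedged sketch of the end goal, here is what a Deployment in a TKG guest cluster pulling from the embedded Harbor registry might look like. The registry address, project name and image below are placeholders, and the pull only succeeds once the TKG nodes trust Harbor’s CA certificate, which is exactly the life-cycle gap described above.

```yaml
# Sketch only: a guest cluster Deployment referencing an image hosted in
# the embedded Harbor registry. The registry IP (10.20.30.40) and the
# project name (demo-namespace) are placeholders; this fails with an
# x509 error until the TKG nodes trust Harbor's CA certificate.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-from-harbor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-from-harbor
  template:
    metadata:
      labels:
        app: nginx-from-harbor
    spec:
      containers:
      - name: nginx
        # Harbor image paths follow <registry>/<project>/<image>:<tag>
        image: 10.20.30.40/demo-namespace/nginx:latest
```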

Deploy Harbor embedded Image Registry on vSphere with Kubernetes (Video)

This short video will demonstrate how to deploy the embedded Harbor Image Registry in vSphere with Kubernetes. It will highlight the different PodVMs used for Harbor, as well as the Persistent Volumes required by some of the PodVMs. The demo will look at the integration between namespaces created in vSphere with Kubernetes and the Harbor projects. I will also show how to download the CA certificate to a client to enable remote access to Harbor. Finally, I will show how to tag and push some images up to the image registry.

Create a new vSphere with Kubernetes namespace (Video)

This short video will demonstrate how to create a new namespace in vSphere with Kubernetes, including Permissions, Storage and Resource Limits. This namespace concept allows vSphere with Kubernetes to implement a type of multi-tenancy, where vSphere resources can be divided up and allocated to individual developers or teams of developers. Thus it is quite a bit different to a native Kubernetes namespace. The video also looks at Harbor Image Registry integration, where a new Harbor project is created per namespace. It also shows where to find details about Kubernetes Compute, Storage and Network artifacts associated with the namespace.
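
For context, the resource limits configured on a vSphere with Kubernetes namespace surface to developers as ordinary Kubernetes ResourceQuota objects inside that namespace. The sketch below is illustrative only; the object name and values are assumptions, so inspect your own namespace with kubectl get resourcequota -n <namespace> -o yaml.

```yaml
# Illustrative only: the CPU, memory and storage limits set on a vK8s
# namespace in the vSphere Client appear to developers as a standard
# ResourceQuota. Name and values here are assumptions for the sketch.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-namespace-quota   # hypothetical object name
  namespace: demo-namespace    # hypothetical vK8s namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 16Gi
    requests.storage: 100Gi
```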

vSphere with Kubernetes on VCF 4.0 Consolidated Architecture

Since the release of VMware Cloud Foundation (VCF) 4.0 over a month ago, I have been asked one question repeatedly: when can I run vSphere with Kubernetes (formerly known as Project Pacific) on a VCF 4.0 Consolidated Architecture? In other words, when can I deploy vSphere with Kubernetes on the Management Domain rather than building a separate VI Workload Domain to run it? The main reason for this request is that it reduces the number of ESXi hosts required to run vSphere with Kubernetes from 7 down to 4. So I am delighted to announce that we now have…