Deploying TKG v1.2.0 in an internet-restricted environment using Harbor

In this post, I am going to outline the steps involved in successfully deploying a Tanzu Kubernetes Grid (TKG) management cluster and workload clusters in an internet-restricted environment. This is often referred to as an air-gapped environment. Note that for part of this exercise, a virtual machine will need to be connected to the internet in order to pull down the images required for TKG. Once these have been downloaded and pushed up to our local Harbor container image registry, the internet connection can be removed and we will work in a completely air-gapped environment. Note that TKG here…
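
As a rough, hedged sketch of that pull/tag/push cycle, the pattern for each image looks something like the following. The source image name and tag are examples only, and harbor.mylab.local with a tkg project stands in for whatever local registry you use:

```bash
# Mirror one TKG image into the local Harbor registry (names are illustrative).
docker pull registry.tkg.vmware.com/cluster-api/cluster-api-controller:v0.3.10_vmware.1
docker tag  registry.tkg.vmware.com/cluster-api/cluster-api-controller:v0.3.10_vmware.1 \
            harbor.mylab.local/tkg/cluster-api/cluster-api-controller:v0.3.10_vmware.1
docker push harbor.mylab.local/tkg/cluster-api/cluster-api-controller:v0.3.10_vmware.1

# Then point the TKG CLI at the local repository before building the
# management cluster, so image pulls go to Harbor instead of the internet.
export TKG_CUSTOM_IMAGE_REPOSITORY="harbor.mylab.local/tkg"
```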

Deploying Harbor v2.1.0 – Step By Step

Over the Thanksgiving break, I took the opportunity to look at the steps required to deploy Tanzu Kubernetes Grid (TKGm) in an air-gapped or internet-restricted environment. The first step to achieving this was to deploy the Harbor Container Image Registry locally in my own environment. While I wrote about Harbor quite a bit in its early days, I haven’t looked at it in earnest recently, so it was good to revisit it and see what has changed. In this post, I’ll walk through the steps involved, and point you to a few scripts that I developed to speed up the process. At…
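
For orientation, here is a minimal sketch of the Harbor offline-installer flow; the hostname is a placeholder and the Trivy flag is optional, so treat this as an outline rather than the exact steps from the post:

```bash
# Fetch and unpack the Harbor 2.1.0 offline installer.
wget https://github.com/goharbor/harbor/releases/download/v2.1.0/harbor-offline-installer-v2.1.0.tgz
tar -xzvf harbor-offline-installer-v2.1.0.tgz && cd harbor

# Create the config from the template, then set 'hostname' (and the HTTPS
# certificate/private key paths) in harbor.yml before installing.
cp harbor.yml.tmpl harbor.yml
sudo ./install.sh --with-trivy   # --with-trivy adds the image scanner; plain ./install.sh also works
```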

A closer look at Antrea, the new CNI for vSphere with Tanzu guest clusters

I’ve spent quite a bit of time highlighting many of the new features of vSphere with Tanzu in earlier blog posts. In those posts, we saw how vSphere with Tanzu could be used to provision Tanzu Kubernetes Grid (TKG) guest clusters to provide a native, upstream-like, VMware-supported Kubernetes. In this post, I want to delve into the guest cluster in more detail and examine the new, default Container Network Interface (CNI) called Antrea that now ships with the TKG guest cluster. Antrea provides networking and security services for a Kubernetes cluster. It is based on the Open vSwitch…
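
As a quick illustration (the namespace and pod labels below are invented for the example), you can list the Antrea pods in a guest cluster and then apply a standard Kubernetes NetworkPolicy, which Antrea enforces through Open vSwitch:

```bash
# Antrea's agent and controller pods run in kube-system in a TKG guest cluster.
kubectl get pods -n kube-system -l app=antrea

# A plain Kubernetes NetworkPolicy: only 'frontend' pods may reach 'web' pods
# on port 80. The 'demo' namespace and labels are hypothetical.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-from-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 80
EOF
```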

vSAN 7.0U1 – File Service SMB Support

One of the new, exciting features in vSAN 7.0U1 is the extension to vSAN File Service. As well as supporting NFS v3 & v4.1, we now also support SMB (Server Message Block) protocols v2 & v3. This protocol is commonly associated with Windows File Shares. In this post, I will go through the new configuration steps, and then we shall present the newly created SMB file share to a Windows desktop. One of the new prerequisites, which wasn’t needed with NFS file shares, is that Active Directory integration is required for SMB. We will see this new step during the…
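
As a one-line sketch of that final step (the file server FQDN, share name, drive letter and AD account are all placeholders), mapping the share from a Windows command prompt looks something like this:

```
:: Map the vSAN File Service SMB share using Active Directory credentials;
:: the trailing * prompts for the password.
net use Z: \\vsan-fs01.rainpole.com\smb-share /user:RAINPOLE\chogan *
```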

Creating developer users and namespaces (scripted) in TKG “Guest” Clusters

I’ve spent a lot of time recently on creating and building out a vSphere with Tanzu environment, with the goal of deploying a Tanzu Kubernetes “guest” cluster. I frequently used the kubectl-vsphere command to log out of the Supervisor namespace context and log in to the Guest cluster context. This allowed me to start deploying stateless and stateful apps in my Tanzu Kubernetes Guest cluster. I thought no more about this step until a recent conversation with my colleague Frank Denneman. He queried whether or not Kubernetes developers would actually have vSphere privileges to do this. It was a great question which led…
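
To give a flavour of both steps (the server address, cluster, namespace and user names below are placeholders, not the actual script from the post), the context switch and one possible RBAC-only way to grant a developer access look roughly like this:

```bash
# Switch from the Supervisor context to the guest cluster context.
kubectl vsphere logout
kubectl vsphere login --server=https://192.168.0.100 \
  --vsphere-username administrator@vsphere.local \
  --tanzu-kubernetes-cluster-namespace demo-ns \
  --tanzu-kubernetes-cluster-name tkg-cluster-01 \
  --insecure-skip-tls-verify

# Inside the guest cluster, create a namespace for the developer and bind the
# built-in 'edit' ClusterRole to their user, avoiding any vSphere privileges.
kubectl create namespace dev-team-a
kubectl create rolebinding dev-team-a-edit \
  --clusterrole=edit --user=developer1 --namespace=dev-team-a
```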

Persistent Volume Placement in HCI-Mesh deployments

One of the new features introduced in vSphere 7.0U1 is HCI-Mesh, the ability to remotely mount vSAN datastores between vSAN clusters managed by the same vCenter Server. My buddy and colleague Duncan has done a great write-up on this topic on his yellow-bricks blog. In this post, I am going to look at how to select the correct vSAN datastore when provisioning Kubernetes Persistent Volumes in an environment that uses HCI-Mesh. Let’s start with why this situation needs additional consideration. Let’s assume that there is a vSphere cluster that has vSAN enabled, and thus this cluster…
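
As a hedged sketch of the general mechanism (the StorageClass and policy names are placeholders), the usual approach with the vSphere CSI driver is a StorageClass that references a storage policy matching only the intended vSAN datastore, so the PVs land where you expect:

```bash
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-local
provisioner: csi.vsphere.vmware.com
parameters:
  # 'local-vsan-policy' is a hypothetical SPBM policy whose rules are
  # satisfied only by the desired (local or remote) vSAN datastore.
  storagepolicyname: "local-vsan-policy"
EOF
```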

vSAN Capacity Management in v7.0U1

With the release of vSAN 7.0U1, a major change was made with regard to what was termed “slack space” requirements. This basically referred to how much space should be set aside on the vSAN datastore for operational and rebuild purposes. I have had a few queries about this recently, so I thought I would take the opportunity to highlight some of the capacity management features now available in vSAN. This would also be a good time to revisit the advanced options for Automatic Rebalance, as well as to discuss the Reactive Rebalance features that we have had in vSAN for some…
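
Purely as a back-of-envelope illustration (the cluster size and capacity are invented, and the one-host's-worth figure is a rule of thumb for the host rebuild reserve rather than the exact formula the UI uses):

```bash
# Rough sizing: reserving roughly 1/N of raw capacity for host rebuild
# in an 8-host cluster with 100 TB of raw vSAN capacity.
HOSTS=8; RAW_TB=100
echo "scale=1; $RAW_TB / $HOSTS" | bc   # => 12.5 TB set aside for rebuild
```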