Yesterday I spun my wheels a bit on an issue I encountered whilst trying to deploy vRealize Suite Lifecycle Manager (vRSLCM) via the vRealize Easy Installer. I downloaded the ISO, opened it up, navigated to the vrlcm-ui-installer folder and clicked on installer.exe. I selected the Install option, then went through the steps to roll out the vRSLCM product, as shown below. Almost immediately on completing the deployment steps, I hit this error: “Failed to send http data”. I examined the logs and this is what I found:
2020-07-13T13:49:45.201Z - info: output:PROGRESS
2020-07-13T13:49:50.307Z - info: output:
2020-07-13T13:49:50.310Z - info: output: ERROR…
In many of my recent posts about vSphere with Kubernetes, I use a single user (administrator@vsphere.local) to do all of my work. This allows me to carry out a range of activities without worrying about permissions. This vSphere Single Sign-On (SSO) administrator has “edit” permissions on all of the vK8s namespaces. In this post, I want to look at how to assign different vSphere SSO users and permissions to different namespaces, and also how these permissions are implemented in the vK8s platform (through the Kubernetes ClusterRole and RoleBinding constructs). Let’s start with a view of what a namespace looks…
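To make this concrete, here is a minimal sketch of how such a permission surfaces in the Kubernetes layer: a RoleBinding in the namespace that binds the built-in “edit” ClusterRole to an SSO user. The namespace (demo-ns), the user (devuser@vsphere.local) and the “sso:” subject prefix are assumptions for illustration; verify the exact names with kubectl in your own environment.

```sh
# List the RoleBindings the platform created in a namespace
# ("demo-ns" is a hypothetical namespace name)
kubectl get rolebindings -n demo-ns

# Sketch of an equivalent binding: grant the built-in "edit"
# ClusterRole to a vSphere SSO user, scoped to this namespace only.
# The "sso:" subject name format is an assumption - check the
# bindings listed above for the format your environment uses.
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devuser-edit
  namespace: demo-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: sso:devuser@vsphere.local
EOF
```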
VMware recently announced the availability of VMware Cloud Foundation (VCF) 4.0.1. I was particularly interested in this release as it introduced some enhancements around vSphere with Kubernetes deployments on the VCF Management Domain. We refer to the deployment of applications onto the management domain as a VCF consolidated architecture. Whilst we were able to deploy vSphere with Kubernetes on the management domain in VCF version 4.0, it was not seamlessly integrated. In particular, it was not possible to select the management domain for the necessary vSphere with Kubernetes validation tests. In VCF 4.0.1, it is now possible to…
After spending quite a bit of time looking at vSphere with Kubernetes, and how one could deploy a Tanzu Kubernetes Grid (TKG) “guest” cluster in a namespace with a simple manifest file, I thought it was time to look at other ways in which customers could deploy TKG clusters on top of vSphere infrastructure. In other words, deploy TKG without vSphere with Kubernetes, or VMware Cloud Foundation (VCF) for that matter. This post will look at TKG multi-cloud (TKGm) version 1.1.2 and in particular the tkg command line tool to first deploy a TKG management cluster, and once that is…
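As a preview of the flow, the basic sequence with the tkg CLI looks something like the sketch below. The cluster name is a placeholder, and the flags are from TKG 1.1-era tooling, so double-check them against `tkg --help` for your version; the vSphere connection details are assumed to have been captured beforehand (for example via the `tkg init --ui` wizard or the ~/.tkg config file).

```sh
# Deploy a TKG management cluster to vSphere using the "dev" plan
# (single control plane node). Assumes vCenter connection details
# are already present in the tkg configuration.
tkg init --infrastructure vsphere --plan dev

# With the management cluster up, create a workload cluster.
# "tkg-workload-01" is a placeholder name.
tkg create cluster tkg-workload-01 --plan dev

# Fetch the kubeconfig/context for the new workload cluster
tkg get credentials tkg-workload-01
```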
A number of readers have hit me up with queries around how they can use the integrated Harbor image repository (which comes with vSphere with Kubernetes) for applications that are deployed on their Tanzu Kubernetes Grid clusters, sometimes referred to as guest clusters. Unfortunately, there is no defined workflow on how to achieve this. The reason is that there are a number of additional life-cycle management considerations that we need to take into account before we can fully integrate these components. This includes adding new TKG nodes to the image registry as a TKG cluster is scaled…
In this short video, I want to show some of the integration points between vSAN 7.0 File Services and Cloud Native Storage (CNS). We will use the CSI driver that ships with vSphere 7.0 to provision a new read-write-many persistent volume backed by a vSAN file share. A read-write-many persistent volume is one that can be accessed by multiple Kubernetes Pods simultaneously. I will then show how CNS provides the vSphere client with all sorts of useful information about the volume. This information is invaluable to a vSphere Admin when trying to figure out how vSphere storage is being consumed when…
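For reference, the manifests involved look something like this minimal sketch: a StorageClass pointing at the vSphere CSI driver, and a PVC requesting ReadWriteMany access, which is what causes the driver to back the volume with a vSAN file share rather than a block volume. The StorageClass name, PVC name and storage policy below are assumptions; substitute a policy compatible with vSAN File Services in your environment.

```sh
cat <<EOF | kubectl apply -f -
# StorageClass using the vSphere CSI provisioner. The policy name
# below is an assumption - use one of your own storage policies.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-file-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"
---
# Requesting ReadWriteMany is what drives the CSI driver to create
# a file share (RWX) instead of a block volume (RWO).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-file-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: vsan-file-sc
EOF
```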
Recently I was asked if “statically” provisioned persistent volumes (PVs) in native, vanilla Kubernetes would be handled by Cloud Native Storage (CNS) in vSphere 7.0, and in turn appear in the vSphere client, just like a dynamically provisioned persistent volume. The short answer is yes, this is supported and works. The details on how to do this are shown here in this post. I am going to use a file-based (NFS) volume for this “static” PV test. Note that there are two ways of provisioning static file-based volumes. The first is to use the in-tree NFS driver. These are…
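By way of illustration, a static in-tree NFS PV and a claim that binds to it look something like the sketch below. The server address, export path and object names are placeholders; the empty storageClassName on the PVC keeps dynamic provisioning from intercepting the claim.

```sh
cat <<EOF | kubectl apply -f -
# A statically provisioned PV using the in-tree NFS driver.
# Server and path are placeholders - point them at your own
# NFS export (e.g. a vSAN file share).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-nfs-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.1
    path: /exports/share-01
---
# A claim that binds to the PV above by name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: ""
  volumeName: static-nfs-pv
EOF
```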