Now that a VCF 4.0 Management Domain has been deployed, we can move on to creating our very first VCF 4.0 Virtual Infrastructure Workload Domain (VI WLD). We will require a VI WLD with an NSX-T Edge cluster before we can deploy Kubernetes on vSphere (formerly known as Project Pacific). Not too much has changed in the WLD creation workflow since version 3.9; we still have to commission ESXi hosts before we can create the WLD. However, one difference from previous versions of VCF is that, in VCF 4.0, we can now automatically provision NSX-T Edge clusters from SDDC Manager to…
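As an aside, the same host commissioning step can also be driven through the SDDC Manager public API rather than the UI. Below is a minimal Python sketch of that idea; the endpoint paths (/v1/tokens, /v1/hosts), field names, FQDNs and credentials shown here are assumptions for illustration, so check the VCF 4.0 API reference for your build before relying on any of them.

```python
# Minimal sketch: commission an ESXi host via the SDDC Manager public API
# ahead of VI WLD creation. Endpoint paths and field names are assumptions;
# verify them against the VCF 4.0 API reference.
import requests

SDDC_MANAGER = "https://sddc-manager.example.com"   # hypothetical FQDN

# Request an access token from SDDC Manager
token_resp = requests.post(
    f"{SDDC_MANAGER}/v1/tokens",
    json={"username": "administrator@vsphere.local", "password": "changeme"},
    verify=False,
)
headers = {"Authorization": f"Bearer {token_resp.json()['accessToken']}"}

# Commission a host into the SDDC Manager inventory so it is available
# when the VI Workload Domain is created
host_spec = [{
    "fqdn": "esxi-wld-01.example.com",        # hypothetical host
    "username": "root",
    "password": "changeme",
    "storageType": "VSAN",
    "networkPoolName": "wld-network-pool",    # hypothetical network pool
}]
resp = requests.post(
    f"{SDDC_MANAGER}/v1/hosts", json=host_spec, headers=headers, verify=False
)
print(resp.status_code, resp.json())
```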
On March 10th, VMware announced a range of new and updated products and features. One of these was VMware Cloud Foundation (VCF) version 4.0. In the following series of blogs, I am going to show you the steps to deploy VCF 4.0. We will begin with the deployment of a Management Domain. Once this is complete, we will commission some additional hosts and build our first workload domain (WLD). After that, we will deploy the latest version of the NSX-T Edge cluster to our Workload Domain. The great news here is that this part has now been automated in VCF 4.0. Finally,…
One of the most common questions in relation to vSAN performance is how much CPU and memory vSAN actually consumes on an ESXi host, i.e. what the overhead of running vSAN is. Through the vSAN Performance Service, we have been able to show both host and vSAN CPU usage for some time. However, up to now, we have only been able to show host memory usage, and not the overhead attributed to vSAN. It has also been extremely difficult to determine how much memory vSAN requires. Way back in 2014, with the first vSAN version 5.5 release, I wrote this…
On March 10th 2020, we saw a plethora of VMware announcements around vSphere 7.0, vSAN 7.0, VMware Cloud Foundation 4.0 and, of course, the Tanzu portfolio. The majority of these announcements tie in very deeply with the overall VMware company vision, which is any application on any cloud on any device. Those applications have traditionally been virtualized applications. Now we are turning our attention to newer, modern applications which are predominantly container-based, and predominantly run on Kubernetes. Our aim is to build a platform which can build, run, manage, connect and protect both traditional virtualized applications and modern containerized…
I recently wanted to deploy a newer version of Kubernetes to see it working with our Cloud Native Storage (CNS) feature. Having assisted with the original landing pages for CPI and CSI, I’d done this a few times in the past. However, the deployment tutorial that we used back then was based on Kubernetes version 1.14.2. I wanted to go with a more recent build of K8s, e.g. 1.16.3. By the way, if you are unclear about the purpose of the CPI and CSI, you can learn more about them on the landing pages, here for CPI and here for…
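Once a cluster built on one of those newer releases is up and the vSphere CSI driver is installed, a quick sanity check is to ask the Kubernetes API which StorageClasses and PersistentVolumes are backed by the driver. The snippet below is a rough sketch using the official Kubernetes Python client; the provisioner name csi.vsphere.vmware.com is the vSphere CSI driver's registered name, while the kubeconfig location and any object names are assumptions for illustration.

```python
# Minimal sketch: check which StorageClasses and PersistentVolumes are backed
# by the vSphere CSI driver (csi.vsphere.vmware.com) once the cluster is up.
# Assumes access to the cluster via the default kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # uses ~/.kube/config by default

# StorageClasses whose provisioner is the vSphere CSI driver
storage = client.StorageV1Api()
for sc in storage.list_storage_class().items:
    if sc.provisioner == "csi.vsphere.vmware.com":
        print("StorageClass:", sc.metadata.name)

# PersistentVolumes provisioned by the vSphere CSI driver (these are the
# volumes that show up in CNS on the vSphere side)
core = client.CoreV1Api()
for pv in core.list_persistent_volume().items:
    if pv.spec.csi and pv.spec.csi.driver == "csi.vsphere.vmware.com":
        print("PV:", pv.metadata.name, "->", pv.spec.csi.volume_handle)
```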
In my most recent VMware Cloud Foundation post (part 13), I highlighted the fact that if you used NSX-T as the networking platform for your workload domain (WLD), you could not attach vRealize Automation (vRA) to such a WLD via SDDC Manager. In that previous post, I showed how to manually deploy the vRA proxy agents on the Proxy VMs. These Proxy VMs were already deployed via SDDC Manager as part of the overall vRA deployment through SDDC Manager, but the agents were not installed at this point. If NSX-V was used as the networking platform for the WLD, then…
I’m still on my VMware Cloud Foundation v3.9 journey. My latest task was to connect my vRealize components to my Workload Domains (WLDs). In part 2, I deployed vRealize Log Insight (vRLI) and vRealize Operations (vROps), and then in parts 3 and 4, I rolled out vRealize Automation. Now I wanted to connect them to the WLDs that I had rolled out previously. SDDC Manager makes this really easy; in just a couple of clicks I had connected vRLI and vROps to both VI WLDs. However, on trying to connect my vRealize Automation (vRA) 7.6 to my WLDs, I…