Note that VCF 4.0 is for greenfield deployments only; you cannot upgrade to it from earlier VCF 3.x versions. Customers who wish to upgrade to VCF 4 will be able to do so when an updated VCF 4.x release is made available. This is currently being worked on.
Disclaimer: “To be clear, this post is based on a pre-GA version of the VMware Cloud Foundation 4.0. While the assumption is that not much should change between the time of writing and when the product becomes generally available, I want readers to be aware that feature behaviour and the user interface could still change before then.”
Those of you already familiar with VCF will be aware that we use Cloud Builder to bring up the Management Domain and the SDDC Manager. I wrote about the VCF 3.9 bring-up here. Not much has changed in that overall process in VCF 4.0: you still populate the spreadsheet or a JSON file with the various infrastructure-related information, such as hostnames, IP addresses, VLAN info, licenses, etc. However, the Management Domain itself has changed. The major change is that we have now moved from NSX-V to NSX-T. Therefore, as part of the bring-up process, the Management Domain will now include a 3-node NSX-T Manager cluster deployment.
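To give a feel for the kind of information that goes into that file, here is a minimal sketch in Python that assembles an illustrative bring-up specification. To be clear, the field names below are simplified stand-ins of my own, not the actual Cloud Builder JSON schema; the real spec is generated from the deployment parameter spreadsheet and is considerably larger.

```python
import json

# Illustrative only: these keys are simplified stand-ins for the kind of
# data Cloud Builder needs, not the real Cloud Builder JSON schema.
bringup_spec = {
    "dnsSpec": {"domain": "vcf.example", "nameserver": "10.0.0.250"},
    "ntpServers": ["10.0.0.250"],
    "hostSpecs": [
        # VCF requires 4 ESXi hosts for the Management Domain
        {"hostname": "esxi01.vcf.example", "ipAddress": "10.0.0.101"},
        {"hostname": "esxi02.vcf.example", "ipAddress": "10.0.0.102"},
        {"hostname": "esxi03.vcf.example", "ipAddress": "10.0.0.103"},
        {"hostname": "esxi04.vcf.example", "ipAddress": "10.0.0.104"},
    ],
    # New in VCF 4.0: a 3-node NSX-T Manager cluster in the Management Domain
    "nsxtSpec": {"managerCount": 3, "vipFqdn": "nsxt-vip.vcf.example"},
    "licenses": {"esxi": "XXXXX", "vsan": "XXXXX", "nsxt": "XXXXX"},
}

print(json.dumps(bringup_spec, indent=2))
```

Populating the real file is where the time goes, as we will see shortly, since much of this data has to come from other teams.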
Another major change compared to VCF 3.9 is that there is no automated deployment of vRealize Log Insight in the Management Domain in 4.0. Instead, customers will have to deploy vRealize Suite Lifecycle Manager (vRSLCM) after the bring-up has completed, and roll out the desired vRealize products from there, e.g. vRealize Operations, vRealize Automation and vRealize Log Insight.
There is another big difference between VCF 3.9.1 and VCF 4.0 as well. In VCF 3.9.1, we introduced the concept of NSX Application Virtual Networks (AVNs), which meant that vRealize components were deployed on these networks rather than on a traditional VLAN. In VCF 4.0, the use of AVNs becomes optional, so you can deploy vRealize components on AVNs or on traditional VLANs – the choice is yours. Other than that, many requirements remain much the same. You still require 4 ESXi nodes for the Management Domain, and a 4-node vSAN cluster is still created, although we now use vSAN 7 in VCF 4.0.
Once the spreadsheet or JSON file is populated and the Cloud Builder appliance is deployed, we can begin to roll out the Management Domain. Point a browser at the appliance, log in, and you are presented with an option to deploy either VCF natively or VCF on VxRail. I'm going to go with the first option:
Next is your list of prerequisites. Read them carefully and make sure everything is in place.
We now come to the point of uploading the spreadsheet/JSON which we populated with all of our configuration information for the Management Domain. I have had comments in the past asking why we are still using a spreadsheet/JSON for this. Surprisingly enough, this is what our VCF customers are asking us for. It does take a while to populate, when you consider that you probably need to involve the network team, and maybe speak to a procurement team for the license information. In fact, we went with a UI approach back in VCF 2.x, and our customers asked us to keep the spreadsheet/JSON method. If you have any thoughts on this approach, please leave me a comment. Anyway, on with the deployment:
After uploading the completed spreadsheet/JSON, validation commences. If any warnings are thrown (and this can happen for a number of reasons), you will need to acknowledge them before continuing with the deployment. In my case, I had some devices on the hosts formatted with VMFS, as well as some NTP configuration issues. I would strongly urge anyone deploying VCF to make sure that all warnings are addressed before continuing with the deployment.
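You can catch some of the most common validation failures before you even upload the file. As a minimal sketch (and nothing like the full set of checks Cloud Builder performs, which also covers NTP sync, VLAN reachability, free disks and more), here is a small Python pre-flight check that confirms each ESXi hostname actually resolves in DNS. The hostnames in the example are illustrative.

```python
import socket

def check_dns(hostnames):
    """Return a dict mapping each hostname to True if it resolves in DNS.

    Forward DNS resolution is one of the most common bring-up blockers;
    this sketch checks only that. Cloud Builder's own validation goes
    much further (NTP, VLANs, disks, etc.).
    """
    results = {}
    for name in hostnames:
        try:
            socket.gethostbyname(name)
            results[name] = True
        except socket.gaierror:
            results[name] = False
    return results

# Illustrative hostnames; substitute the hosts from your bring-up spec.
report = check_dns(["localhost", "esxi01.vcf.example"])
for host, ok in report.items():
    print(f"{host}: {'resolves' if ok else 'DOES NOT resolve'}")
```

Running something like this against every hostname in the spreadsheet before uploading can save you a failed validation round-trip.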
And off we go with the actual deployment. I have captured the tasks which show that NSX-T is now being rolled out in the Management Domain during the initial bring-up.
The deployment will eventually complete and present you with the following window to launch SDDC Manager. At this point, we have a 4-node vSAN 7 cluster, managed by vCenter Server 7.0, with the NSX-T Managers and SDDC Manager for VCF 4.0 deployed. All hosts are automatically configured as Transport Nodes in NSX-T, with both an Overlay Transport Zone and a VLAN Transport Zone.
And if I launch SDDC Manager, I can log in and start thinking about commissioning hosts, building new workload domains, and indeed how easy it is to deploy a solution such as Kubernetes on vSphere (formerly known as Project Pacific), which is fully integrated with VCF 4.0. I'll be showing you how to do that in some upcoming blog posts.
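Note that you don't have to drive everything through the UI: SDDC Manager also exposes a public REST API. The sketch below shows roughly how a client might obtain an access token and then list hosts; the hostname and credentials are placeholders, and the endpoint paths (`/v1/tokens`, `/v1/hosts`) are from my reading of the VCF API, so check the API reference for your release before relying on them.

```python
import json
import urllib.request

# Placeholder SDDC Manager FQDN; substitute your own.
SDDC_MANAGER = "https://sddc-manager.vcf.example"

def token_request(username, password):
    """Build the POST request used to obtain an API access token."""
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        SDDC_MANAGER + "/v1/tokens",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def hosts_request(access_token):
    """Build the GET request used to list hosts known to SDDC Manager."""
    return urllib.request.Request(
        SDDC_MANAGER + "/v1/hosts",
        headers={"Authorization": "Bearer " + access_token},
    )

# In a real environment you would pass these to urllib.request.urlopen();
# here we just build the requests to show their shape.
req = token_request("administrator@vsphere.local", "changeme")
print(req.full_url)  # https://sddc-manager.vcf.example/v1/tokens
```

The token returned by the first call is then supplied as a Bearer token on subsequent requests, as `hosts_request` illustrates.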
One final word on the inclusion of NSX-T in both the Management Domain and workload domains. NSX-T will now become the default pod networking solution in the Kubernetes on vSphere Supervisor cluster, as well as playing a significant role in north-south traffic for VMware Tanzu Kubernetes Grid, the guest Kubernetes clusters that can be provisioned on Kubernetes on vSphere. It offers network switching, routing, firewalls, NAT, load balancers, and more. As we get further into this series of posts, the benefits of this integration will become more and more apparent.