Preparation
You will need to have a number of IP addresses and FQDNs set up in advance of creating the VI Workload Domain. These are:
- vCenter Server for the new WLD
- NSX-T Manager 1 (previously controller, but now joint manager/controller in v2.5)
- NSX-T Manager 2
- NSX-T Manager 3
- NSX-T Manager Cluster
Download NSX-T Bundle
Before we begin, we need to download the Install Bundle for NSX-T into the SDDC Manager repository, under Repository > Bundles:
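As an aside, bundle availability can also be checked outside the UI. Here is a minimal Python sketch against the SDDC Manager public API; it assumes your VCF release exposes the /v1/tokens and /v1/bundles endpoints, and the FQDN and credentials are placeholders for your own environment:

```python
import requests

SDDC_MANAGER = "https://sddc-manager.vcf.local"   # placeholder FQDN for your SDDC Manager
USERNAME = "administrator@vsphere.local"          # placeholder SSO credentials
PASSWORD = "changeme"

# Request an API token (assumes the /v1/tokens endpoint is available in your VCF release)
resp = requests.post(f"{SDDC_MANAGER}/v1/tokens",
                     json={"username": USERNAME, "password": PASSWORD},
                     verify=False)
token = resp.json()["accessToken"]

# List bundles and show their download status
bundles = requests.get(f"{SDDC_MANAGER}/v1/bundles",
                       headers={"Authorization": f"Bearer {token}"},
                       verify=False).json().get("elements", [])

for bundle in bundles:
    # Field names (components/type/downloadStatus) follow the VCF public API docs,
    # but may vary between releases
    component = bundle.get("components", [{}])[0].get("type", "UNKNOWN")
    print(component, "-", bundle.get("downloadStatus"))
```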
License NSX-T
Next, under Administration > Licenses, add a new license for NSX-T:
Of course, valid vSphere and vSAN licenses will be needed as well. Note that I need vSAN licenses only because I plan to roll out vSAN as my primary storage in my new WLD.
Create Virtual Infrastructure (VI) Workload Domain
Once the NSX-T bundle has been downloaded and validated, and the license key for NSX-T successfully added, we can begin the creation of the VI WLD. In SDDC Manager, navigate to Inventory > Workload Domains and click on the + Workload Domain button, and from the drop-down select VI – Virtual Infrastructure.
Storage Selection
Workload Domains can consume different storage types, such as vSAN, NFS and Fibre Channel storage. As I mentioned earlier, I am going to select vSAN as the storage type for this workload domain. This will be automatically configured later.
VI Configuration
In this next stage of the deployment wizard, a name for the VI workload domain needs to be added, along with a Cluster name and an Organization name:
Compute (vCenter Details)
The next screen is all about adding vCenter Server details and configuration, including information such as IP address, FQDN, and a Root password:
Networking (NSX-T Details)
The next section is important as it is where you populate the Fully Qualified Domain Names (FQDNs) and IP addresses of the NSX-T Controllers/Managers and the Manager Cluster. You will need to have 4 IP addresses and FQDNs set aside, and ensure that forward and reverse DNS lookups are working. Spend a bit of time making sure this is correct before continuing to the next step.
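A quick way to validate this is to script both lookups for the four entries. Here is a minimal Python sketch using the standard socket module; the FQDNs and IP addresses are placeholders for your own NSX-T Manager and Manager Cluster entries:

```python
import socket

# Placeholder FQDN/IP pairs for the three NSX-T Managers and the Manager Cluster
expected = {
    "wld01-nsx01.vcf.local": "10.0.0.41",
    "wld01-nsx02.vcf.local": "10.0.0.42",
    "wld01-nsx03.vcf.local": "10.0.0.43",
    "wld01-nsx.vcf.local":   "10.0.0.40",
}

for fqdn, ip in expected.items():
    try:
        forward = socket.gethostbyname(fqdn)              # forward (A record) lookup
        reverse = socket.gethostbyaddr(ip)[0]             # reverse (PTR record) lookup
    except OSError as err:
        print(f"{fqdn}: lookup failed ({err})")
        continue
    ok = forward == ip and reverse.lower().rstrip(".") == fqdn
    print(f"{fqdn}: forward={forward}, reverse={reverse} -> {'OK' if ok else 'MISMATCH'}")
```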
One thing that may be a bit confusing is the use of NSX-T Manager throughout this wizard. I guess the NSX-T team have moved away from the term controller now (at least since I last used the product). Note also that only 3 appliances are deployed; the NSX-T Manager Cluster IP is not a separate appliance. The manager functionality is clustered across the 3 appliances, and the Manager and Controller roles are now combined into the same appliances.
[Edit] I wanted to add a short note about the VLAN ID in the Overlay Networking section that I omitted in the original post. The overlay network is the network used for your overlay tunnels in NSX-T. When we build the WLD, all hosts will be plumbed up with VMkernel interfaces (typically vmk10 and vmk11) which allow them to communicate on this overlay. In a future post, we will see how NSX-T Edges can also be added to this overlay. However, the NSX-T Edge overlay should use a different VLAN, not the one picked here. The VLAN chosen here and the one used for the NSX-T Edge later on should be routable; in other words, they need to be able to communicate with each other. Choose the VLAN ID carefully, as this is what will be used on the overlay network. Note also that there needs to be a DHCP server available on that VLAN to provide IP addresses for the tunnel endpoints (VMkernel portgroups).
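Once the hosts have been built, you can confirm that the tunnel endpoint VMkernel interfaces actually received DHCP addresses on that VLAN. Below is a minimal Python sketch using the paramiko SSH library; it assumes SSH is enabled on the hosts, that vmk10/vmk11 are the TEP interfaces in your deployment, and that the host names and credentials are placeholders:

```python
import paramiko

HOSTS = ["esxi-wld-01.vcf.local", "esxi-wld-02.vcf.local", "esxi-wld-03.vcf.local"]  # placeholders
USER, PASSWORD = "root", "changeme"   # placeholder credentials

for host in HOSTS:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=USER, password=PASSWORD)
    # Show the IPv4 configuration of the (assumed) TEP VMkernel interfaces
    for vmk in ("vmk10", "vmk11"):
        _, stdout, _ = ssh.exec_command(f"esxcli network ip interface ipv4 get -i {vmk}")
        print(f"--- {host} {vmk} ---")
        print(stdout.read().decode().strip())
    ssh.close()
```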
Here is the configuration from my lab environment:
vSAN Storage
Near the beginning of the workflow, we requested vSAN storage for this WLD. In the next screen, you are asked to make some decisions regarding how that storage is to be consumed. With 3 hosts, we can set a Failures To Tolerate (FTT) value of 1, which results in RAID-1 mirroring as the storage policy. A RAID-1 FTT=1 policy places a copy of the data on two hosts and a quorum/witness on a third host. This means that we can have a host failure in this WLD and still have virtual machines and their data available. With more hosts, the FTT value could be increased. In fact, this form also seems to allow you to create an FTT=0 policy, but I wouldn't personally recommend this unless you have a valid reason for deploying unprotected data.
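To put some numbers on this: a mirrored (RAID-1) policy needs 2 x FTT + 1 hosts, and consumes FTT + 1 full copies of the data on the vSAN datastore. A quick Python sketch illustrating the relationship:

```python
# Simple RAID-1 (mirroring) vSAN policy maths
def raid1_requirements(ftt: int, vm_size_gb: float):
    hosts_required = 2 * ftt + 1        # data replicas plus witness component(s)
    copies = ftt + 1                    # number of full data replicas
    return hosts_required, vm_size_gb * copies

for ftt in (0, 1, 2):
    hosts, raw_gb = raid1_requirements(ftt, vm_size_gb=100)
    print(f"FTT={ftt}: minimum {hosts} hosts, 100 GB of VM data consumes {raw_gb:.0f} GB raw")
```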
One other option available here is the ability to enable Deduplication and Compression technologies on vSAN for space savings. This is disabled by default.
Host Selection
Now I can select the 3 hosts that I commissioned in part 5. Note the best practice stating that hosts should have similar/identical configurations. Also of interest is the total amount of resources available across the 3 hosts. In this example, my storage is all Solid State Disk (SSD) drives, so this will be an All-Flash vSAN deployment:
Licenses
I now need to select the licenses that will be consumed by the infrastructure in this workload domain. I added an NSX-T license earlier in this post, and licenses for vSAN and vSphere in previous posts. Now I need to choose the appropriate licenses for these components; they will be displayed in the drop-down for each component.
Object Names
This screen is really part of the summary. It shows you how the values entered previously will be used to create the names of the various objects that will be deployed in the new VI Workload Domain.
Review
The final screen of the workflow lets you review the previous inputs one last time before you click Finish and start the Virtual Infrastructure Workload Domain deployment.
Monitoring VI Workload Domain deployment
In SDDC Manager, there is a new task initiated for the creation of the workload domain. This appears in the task bar at the bottom of the SDDC Manager UI. You can interact with this task bar and look at the subtasks involved in creating the domain. For example, here are some of the sub-tasks from the very early stages of the domain creation where the distributed switch is getting created and configured, and the hosts are getting added to the cluster:
Using the tools to the right of the task bar, we can expand it to look at more than 3 subtasks at a time. Click the middle of the 3 icons on the right-hand side to see a full-screen view of the tasks, as shown below:
And we can follow these sub-tasks all the way down to setting up NSX-T as well:
This sub-task flow gives you a pretty good idea of how far the deployment has progressed and what stage it has reached. For a more detailed view of the VI WLD deployment progress, SSH to the SDDC Manager appliance. You can then tail the log file: /var/log/vmware/vcf/domainmanager/domainmanager.log
Of course, this log file has a lot more detail than the task view above, but it can be very useful for checking the reason for a particular task failure. Note that if a task does fail, the deployment stops to allow you to take action to address the issue. You can then resume the VI WLD deployment from where it stopped.
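If you would rather not page through the whole log, a small script can pull out just the error entries. Here is a minimal Python sketch, run on the SDDC Manager appliance itself; it simply assumes that failed steps are logged with an ERROR level marker:

```python
# Print only ERROR-level lines from the domain manager log, with one line of context
LOG = "/var/log/vmware/vcf/domainmanager/domainmanager.log"

with open(LOG) as f:
    lines = f.readlines()

for i, line in enumerate(lines):
    if " ERROR " in line:
        print(line.rstrip())
        if i + 1 < len(lines):               # following line often carries more detail
            print("    " + lines[i + 1].rstrip())
```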
Verify VI WLD deployment success
If the workload domain has successfully deployed, you should see the new domain in the Inventory > Workload Domains view in SDDC Manager. If you click on the View Details option in the Virtual Infrastructure (VI) box, it will show you additional information about the existing workload domains:
Now we have both the MGMT (management) domain and a new workload domain. If you click on the new workload domain name (wld01) in the list, a new view is rendered in the UI which will provide more information about the domain. If you click on Services, you will get a link to both the vCenter Server and the NSX-T Manager:
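The same inventory can also be confirmed from the SDDC Manager public API. Here is a minimal Python sketch along the lines of the earlier bundles example; it assumes a /v1/domains endpoint with name, type and status fields, which may vary by VCF release:

```python
import requests

SDDC_MANAGER = "https://sddc-manager.vcf.local"   # placeholder FQDN
TOKEN = "..."   # Bearer token obtained as in the earlier bundles sketch

domains = requests.get(f"{SDDC_MANAGER}/v1/domains",
                       headers={"Authorization": f"Bearer {TOKEN}"},
                       verify=False).json().get("elements", [])

for domain in domains:
    # Expect to see the MGMT domain plus the new wld01 domain with an ACTIVE status
    print(domain.get("name"), "-", domain.get("type"), "-", domain.get("status"))
```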
Here is the vCenter view of our new Workload Domain (wld01), which shows the 3 hosts, the vSAN datastore, both NSX-T network segments and the distributed switch portgroups. Note that you should not delete the distributed switch portgroups, as these will be used to revert to the original configuration should you ever decide to delete the workload domain.
Since this deployment also included NSX-T, let's log into the NSX-T Manager and look at some of the configuration that was initiated there:
So we can see 3 NSX Management Nodes and a Host Cluster. We can also see 3 Host Transport Nodes (our 3 ESXi hosts) as well as 2 Transport Zones, one for Overlay (East/West traffic) and one for VLAN (North/South). This all looks good. Now, my next objectives are to connect this workload domain up to my vRealize products, as well as roll out my next Workload Domain, PKS. The latter will require the manual deployment of an NSX-T Edge, which will need to be done first. Watch this space for further updates!
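As a final check, the transport zone and transport node configuration can also be pulled from the NSX-T API rather than the UI. Here is a minimal Python sketch; it assumes admin credentials on the Manager Cluster VIP and the standard /api/v1 endpoints, with the FQDN and password as placeholders:

```python
import requests

NSX_MANAGER = "https://wld01-nsx.vcf.local"     # placeholder Manager Cluster FQDN
AUTH = ("admin", "changeme")                    # placeholder credentials

def get(path):
    return requests.get(f"{NSX_MANAGER}{path}", auth=AUTH, verify=False).json()

# Expect one OVERLAY and one VLAN transport zone
for tz in get("/api/v1/transport-zones").get("results", []):
    print("Transport zone:", tz.get("display_name"), "-", tz.get("transport_type"))

# Expect the three ESXi hosts of the new WLD as transport nodes
for tn in get("/api/v1/transport-nodes").get("results", []):
    print("Transport node:", tn.get("display_name"))
```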
For a full list of posts on VMware Cloud Foundation, click here.