
Getting started with VCF Part 9 – PKS deployment

We are nearing the end of our journey with Getting Started with VMware Cloud Foundation (VCF). In this post, we will go through the deployment of Enterprise PKS v1.5 on a Workload Domain created in VCF v3.9. We’ve been through a number of steps to get to this point, all of which can be found here. Now we have some of the major prerequisites in place, notably NSX-T Edge networking and PKS Certificates, so we can proceed with the Enterprise PKS deployment. However, there are still a few additional prerequisites needed before we can start. Let’s review those first of all.

Enterprise PKS Prerequisites

Let’s recap some of the prerequisites that need to be in place before we can commence the deployment of Enterprise PKS.

The full range of prerequisites can be found in the official VCF 3.9 documentation.

For this post, here are the Node and Pod IP blocks that I plan on using: 20.0.0.0/16 and 30.0.0.0/16 respectively.

And here is the Floating IP Address Pool, 147.70.30.0/24, which will provide Load Balancer IP addresses to my Kubernetes Services.
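To make the purpose of this pool concrete, here is a minimal sketch of what it is used for once a Kubernetes cluster exists (the nginx Deployment and the exact addresses shown are assumptions, purely for illustration):

    # Illustrative only: exposing a Deployment as type LoadBalancer causes
    # NSX-T (via NCP) to allocate an address from the Floating IP Pool.
    kubectl expose deployment nginx --port=80 --type=LoadBalancer
    kubectl get svc nginx
    # NAME    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)
    # nginx   LoadBalancer   10.100.200.20   147.70.30.2   80:30080/TCP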

One final item to note: I have already created a PKS Management network segment and a PKS Services network segment. See step 11 in part 7 of this blog series on VCF for more details on how I did this.

[Update] What became apparent after I completed this deployment is that the PKS Services network segment is not used anywhere. In earlier releases of Enterprise PKS, the K8s nodes for all of the K8s clusters were placed on the service network. In later versions of PKS with NSX-T integration, the capability to place each K8s cluster’s nodes on its own LS/T1 was introduced. Thus the service network is now deprecated. So even though you need to provide this service network as part of the deployment, and even though it is still plumbed into the BOSH and PKS tiles in Enterprise PKS 1.5, it is no longer used. In fact, I’ve been informed that this service network entry has been removed from the BOSH and PKS tiles in the latest versions of PKS. Kudos to Francis Guiller for this useful information.

Now, with all the prerequisites in place, we can proceed with the actual Enterprise PKS deployment.

Enterprise PKS deployment on a Workload Domain in VCF

In the VCF SDDC Manager UI, navigate to Workload Domain, click on the +Workload Domain button and choose PKS. The first thing that will pop up is a PKS Deployment Prerequisites window. We have covered all of these now, so you can simply click Select All, and continue:

Next, accept the EULAs. There are 3 in total to select:

Now you get to General Settings. Here, we must provide the name of the PKS Solution, choose the Workload Domain on which Enterprise PKS will be provisioned from the drop-down list, and populate the various passwords for Operations Manager. Note also the default username of ubuntu, which will come in useful later on.

Now you need to populate some of the NSX-T Settings, such as the Tier-0 Logical Router, Node Block, Pod Block and Load Balancer/Floating IP Pool. If everything has been configured correctly, these should all be available from the drop-downs.
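If you want to sanity-check these objects outside of the wizard, the NSX-T Manager API can list them. A hedged sketch, assuming an NSX-T Manager at nsx-mgr.example.com and admin credentials (both placeholders), using the NSX-T 2.x Manager API and jq:

    # Placeholders for hostname/credentials. Lists the Tier-0 routers,
    # IP blocks (node/pod) and IP pools (floating) that the wizard
    # drop-downs should offer.
    curl -ks -u admin:'Password!' \
      "https://nsx-mgr.example.com/api/v1/logical-routers?router_type=TIER0" \
      | jq -r '.results[].display_name'

    curl -ks -u admin:'Password!' \
      "https://nsx-mgr.example.com/api/v1/pools/ip-blocks" \
      | jq -r '.results[] | "\(.display_name): \(.cidr)"'

    curl -ks -u admin:'Password!' \
      "https://nsx-mgr.example.com/api/v1/pools/ip-pools" \
      | jq -r '.results[] | "\(.display_name): \(.subnets[].cidr)"'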

Next, it is time to add some PKS Settings. Here, the fully qualified domain name of the PKS appliance is requested (API FQDN). I am not deploying Harbor in this example, but if the Harbor option is checked, you need to provide the FQDN of the Harbor Registry appliance/VM. The rest of the settings relate to the Operations Manager appliance.
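Before moving on, it is worth confirming that these FQDNs actually resolve, since the validation step later on checks DNS. A quick check from any host with DNS visibility (all names and addresses below are placeholders, not values from my setup):

    # Placeholders only; forward and reverse lookups should both succeed.
    dig +short pks-api.example.com       # PKS API FQDN
    dig +short ops-mgr.example.com       # Operations Manager FQDN
    dig +short -x 147.70.10.10           # reverse lookup of the Ops Manager IP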

Certificates are next. Refer back to part 8 of this VCF series for information about the format and naming convention required by this part of the installer wizard. As mentioned in that post, there are some nuances here. Once again, if Harbor was selected previously, a certificate and private key for that appliance would also be required.
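Before pasting a certificate and private key into the wizard, it is also worth verifying that they belong together, and that the Subject Alternative Names cover the FQDNs from the previous step. A small openssl sketch (the file names are assumptions):

    # File names are assumptions. The two modulus hashes must match,
    # otherwise the certificate and private key are not a pair.
    openssl x509 -noout -modulus -in pks-api.crt | openssl md5
    openssl rsa  -noout -modulus -in pks-api.key | openssl md5

    # Confirm the SANs include the PKS API FQDN entered earlier.
    openssl x509 -noout -text -in pks-api.crt | grep -A1 "Subject Alternative Name"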

Now we start to add some information around the Management Availability Zone. The Management Network refers to the PKS Management network overlay that we built earlier (see VCF blog part 7). Note the reserved IP range – it should not include the network gateway, nor the IP addresses of the BOSH, PKS or Harbor VMs, but it should include the IP address of the Operations Manager VM. For the Management AZ, provide a name, select the cluster and choose the correct Resource Pool (which must have been created as part of the prerequisites).
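To make the reserved range rule concrete, here is a purely illustrative layout (every address below is made up, not from my environment):

    # Illustrative addressing for a PKS Management segment, 147.70.10.0/24:
    #
    #   147.70.10.1        gateway        -> NOT in the reserved range
    #   147.70.10.2-10     reserved range -> includes Ops Manager (147.70.10.10)
    #   147.70.10.11       BOSH Director  -> outside the reserved range
    #   147.70.10.12       PKS API        -> outside the reserved range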

This brings us to the Kubernetes Availability Zone. Here you simply populate network information, similar to what we did for the management network. Note that the network is the PKS Service network segment (overlay) built previously. [Update] However, as we later found out, this is no longer used for the K8s nodes; instead, NSX-T creates a Logical Switch/T1 for each K8s cluster. You still need to populate this field, however.

What you might notice in the last window is that we did not provide any information about Availability Zones. This is done in the next window, where we begin to populate the Compute Availability Zones for the Kubernetes cluster. Typically, this is where our Kubernetes masters and worker nodes get deployed, but this is all decided by the plan that is used to create the cluster. We will see this in the next part of this series of blog posts. I suppose we could have referred to them as PKS Services Availability Zones to keep the naming aligned with the Management AZ. As mentioned, we will create 3 Resource Pools and map those Resource Pools to AZs. When we deploy our Kubernetes cluster, our plan will decide whether masters and workers are placed in the different AZs by PKS for availability. More on this in the next post.

And that’s it. We can now Review the details before having the values validated.

And if the Review looks good, we can proceed with the Validation step. This checks a whole bunch of things: DNS resolution, that no IP addresses overlap the reserved ranges, and of course the validity of the certificates. All going well, there should be no errors and every status should show as INFO.

You can now click on the FINISH button to start the PKS deployment.

Monitoring Enterprise PKS Workload Activation

What you should now notice is that there is a new PKS Workload Domain in the process of activating.

And there are a lot of subtasks as part of this PKS deployment:

While this can offer some insight into what is happening, you can get further details by SSH’ing onto the SDDC Manager and tailing solutionsmanager.log found under /var/log/vmware/vcf/solutionsmanager. This gives some very good details in the early part of the deployment, but once BOSH, PKS and Harbor are being deployed and configured, there is not much visibility into what is going on. When it gets to that stage of the deployment, you can actually connect to the Operations Manager UI, login as the ubuntu user, and monitor what is happening from there. In the Summary tab of the PKS domain, click on the Pivotal Ops Manager link, and log in using the ubuntu credentials that you provided in the General Settings part of the deployment wizard.
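For reference, here is what that looks like (the SDDC Manager hostname is a placeholder; in VCF 3.x you would typically SSH in as the vcf user):

    # Hostname is a placeholder; log path as described above.
    ssh vcf@sddc-manager.example.com
    tail -f /var/log/vmware/vcf/solutionsmanager/solutionsmanager.log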

From there, you can see that Ops Manager is ‘Applying changes’ to the configuration. Click on the ‘Show Progress’ button to see more details about what is happening.

Here are the BOSH & PKS deployment and configuration logs as viewed from my deployment.

Assuming everything has worked successfully (this will take some time to deploy), you should end up with an Active PKS workload domain on VMware Cloud Foundation.

The next step is to use PKS to create a Kubernetes cluster. I will show you how to do that in the next post. For now, sit back and bask in the glory of having successfully deployed Enterprise PKS in a Workload Domain in VMware Cloud Foundation. That’s what I’m doing 🙂
