Step 1. Download the prerequisite tools
We require the following tools:
- PKS command line interface (pks), for the creation, deletion and querying of K8s clusters
- kubectl command line interface, for communicating to the K8s clusters
- (Optional) BOSH command line interface (bosh), which is very useful for tracking activities and tasks in Enterprise PKS, though not essential for deploying a K8s cluster
You can get the pks CLI and kubectl from network.pivotal.io, as shown below. You will find both in the Pivotal Container Service (PKS) section. Download the builds appropriate to your desktop OS and add them to your PATH, e.g. /usr/local/bin.
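Installing them is just a matter of making the binaries executable and copying them onto your PATH. The filenames below are assumptions for a Linux desktop – adjust them to match whatever you actually downloaded:
$ chmod +x pks-linux-amd64-1.5.0-build.32        # assumed download name
$ mv pks-linux-amd64-1.5.0-build.32 /usr/local/bin/pks
$ chmod +x kubectl-linux-amd64-1.14.5            # assumed download name
$ mv kubectl-linux-amd64-1.14.5 /usr/local/bin/kubectl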
You will also need to pull down the bosh CLI. You can get the latest version (6.2.0) from an AWS repository called bosh-cli-artifacts, using the following commands on your desktop:
$ wget https://s3.amazonaws.com/bosh-cli-artifacts/bosh-cli-6.2.0-linux-amd64
$ chmod +x bosh-cli-6.2.0-linux-amd64
$ mv bosh-cli-6.2.0-linux-amd64 /usr/local/bin/bosh
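A quick way to confirm that each of the three tools is on your PATH is to check its version:
$ pks --version
$ kubectl version --client
$ bosh --version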
Great – the tools are downloaded, and we can start the deployment of our first K8s cluster.
Step 2. Create PKS Credentials
The next step is to authenticate to the PKS API server appliance. This can be done using the 'ubuntu' user that we provided a password for during the PKS deployment in part 9 of this series. The command looks something like this:
cormac@pks-cli:~$ pks login -a w01-pks-01.rainpole.com -u ubuntu -k
Password: **********

API Endpoint: w01-pks-01.rainpole.com
User: ubuntu
cormac@pks-cli:~$
This creates a .pks/creds.yml file in your home directory, which is used by subsequent pks CLI commands.
Step 3. Create BOSH Credentials
To be able to run the bosh CLI, we need to get some BOSH credentials from Ops Manager. The easiest way to do this is to log in to the Ops Manager UI. You will find the link under Workload Domains: click on the PKS Workload Domain (View Details), select Service VMs, and then click the URL for the Pivotal Ops Manager. This brings up the Ops Manager UI.
After logging into the UI with the ubuntu credentials, click on the BOSH Director for vSphere tile and select the Credentials tab; near the bottom of the list you will find the Bosh Commandline Credentials, as shown below.
Click on 'Link to Credential', and it will display a number of environment variables, such as BOSH_CLIENT, BOSH_CLIENT_SECRET, BOSH_ENVIRONMENT and BOSH_CA_CERT.
You now need to export these environment variables in your own desktop shell – I typically add them to my .bash_profile so that they are set each time I log in. Note that the BOSH_CA_CERT environment variable points to the root_ca_cert location on the Pivotal Ops Manager appliance. You can copy that certificate to your own desktop and then modify the environment variable accordingly. Here is an example of how to do that, followed by the exports themselves.
cormac@pks-cli:~$ scp ubuntu@w01-opsmgr-01.rainpole.com:/var/tempest/workspaces/default/root_ca_certificate ~/opsmanager.pem
Unauthorized use is strictly prohibited. All access and activity is subject to logging and monitoring.
ubuntu@w01-opsmgr-01.rainpole.com's password: **********
root_ca_certificate                          100% 1208     2.4MB/s   00:00
cormac@pks-cli:~$ export BOSH_CLIENT=ops_manager
cormac@pks-cli:~$ export BOSH_CLIENT_SECRET=SoMeSortOfSecretText
cormac@pks-cli:~$ export BOSH_ENVIRONMENT=147.70.10.11
cormac@pks-cli:~$ export BOSH_CA_CERT=~/opsmanager.pem
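To make these settings persist across logins, one option is to append the same exports to your ~/.bash_profile – a minimal sketch, where the secret and IP address are the placeholder values from the example above:
$ cat >> ~/.bash_profile <<'EOF'
export BOSH_CLIENT=ops_manager
export BOSH_CLIENT_SECRET=SoMeSortOfSecretText
export BOSH_ENVIRONMENT=147.70.10.11
export BOSH_CA_CERT=~/opsmanager.pem
EOF
$ source ~/.bash_profile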
We can now run bosh commands, e.g. bosh deployments or bosh vms. We will see shortly how useful this is for tracking activity, especially the creation of Kubernetes clusters.
We will not use the kubectl command until we have created our first Kubernetes cluster.
Step 4. Create our first Kubernetes cluster
The following pks CLI command is used to create our very first Kubernetes cluster on Enterprise PKS. But before we run it, let’s take a moment to describe it:
pks create-cluster k8s-clus-01 --external-hostname pks-clus-01 --plan small --num-nodes 3
- k8s-clus-01 – this is the name of the Kubernetes cluster
- --external-hostname pks-clus-01 – this is the DNS name for our master node, and should have a DNS entry or an /etc/hosts entry created for it. kubectl will need to communicate with the API server on the master(s) when we start using it. However, we have to wait and see which IP address (from the floating/load balancer pool) is mapped to the master node of the cluster before we can complete this step
- --plan small – the plan as described in the Enterprise PKS tile in Pivotal Ops Manager. By default, this small plan creates a single master/etcd node, and builds master and worker VMs with 2 CPUs, 4 GiB of memory and 32 GB VMDKs. You can enable and change additional plans from the Ops Manager UI, and list the currently enabled plans with pks plans, as shown after this list. Note that plans also define how Availability Zones should be used by masters and workers. More on this shortly.
- --num-nodes 3 – this is the number of worker nodes that will be created in this Kubernetes cluster
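To see which plans are currently enabled on your PKS instance (and therefore which names you can pass to --plan), you can run:
$ pks plans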
Let’s run the command:
cormac@pks-cli:~$ pks create-cluster k8s-clus-01 --external-hostname pks-clus-01 --plan small --num-nodes 3

Name:                     k8s-clus-01
Plan Name:                small
UUID:                     23d0fc1f-200b-41c3-9dba-7bdb53b52180
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Creating cluster
Kubernetes Master Host:   pks-clus-01
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  In Progress
Network Profile Name:

Use 'pks cluster k8s-clus-01' to monitor the state of your cluster
cormac@pks-cli:~$
After some time, querying the cluster shows that provisioning has completed, and we can see the IP address allocated to the Kubernetes master:
cormac@pks-cli:~$ pks cluster k8s-clus-01

PKS Version:              1.5.0-build.32
Name:                     k8s-clus-01
K8s Version:              1.14.5
Plan Name:                small
UUID:                     ce2567f3-233e-4ec6-b1e4-fafbbecbadeb
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   pks-clus-01
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  147.70.30.2
Network Profile Name:
cormac@pks-cli:~$
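If you build more than one cluster, pks clusters lists every cluster managed by this PKS instance – a quick way to check their status at a glance:
$ pks clusters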
We can also follow the deployment from the BOSH side. Running bosh task with no arguments attaches to the most recently running (or last completed) task, which in this case is the cluster creation:
cormac@pks-cli:~$ bosh task
Using environment '147.70.10.11' as user 'director'

Task 31

Task 31 | 17:16:11 | Preparing deployment: Preparing deployment
Task 31 | 17:16:13 | Warning: DNS address not available for the link provider instance: pivotal-container-service/cec0ab96-6f39-46f1-a277-8a49181dc244
Task 31 | 17:16:13 | Warning: DNS address not available for the link provider instance: pivotal-container-service/cec0ab96-6f39-46f1-a277-8a49181dc244
Task 31 | 17:16:13 | Warning: DNS address not available for the link provider instance: pivotal-container-service/cec0ab96-6f39-46f1-a277-8a49181dc244
Task 31 | 17:16:23 | Preparing deployment: Preparing deployment (00:00:12)
Task 31 | 17:16:23 | Preparing deployment: Rendering templates (00:00:07)
Task 31 | 17:16:30 | Preparing package compilation: Finding packages to compile (00:00:00)
Task 31 | 17:16:30 | Compiling packages: jq/c6a6daa7f64fc4775d11c0d4441d9fcf49506746
Task 31 | 17:16:30 | Compiling packages: nsx-cni/cab69c27665c0ff1a5210adc44fd97efc6b74ea0
Task 31 | 17:16:30 | Compiling packages: nsx-cni-common/dc5b3b6618f30a09827997245e20288c56dd3f20
Task 31 | 17:16:30 | Compiling packages: nsx-python27/75df9f63298d0d2644c6030b160b9b7486a9c195
Task 31 | 17:18:03 | Compiling packages: nsx-cni-common/dc5b3b6618f30a09827997245e20288c56dd3f20 (00:01:33)
Task 31 | 17:18:03 | Compiling packages: ncp_rootfs/5973b83f6fdd6a4c5286d675fd0729e98acd61b0
Task 31 | 17:18:10 | Compiling packages: jq/c6a6daa7f64fc4775d11c0d4441d9fcf49506746 (00:01:40)
Task 31 | 17:18:13 | Compiling packages: nsx-cni/cab69c27665c0ff1a5210adc44fd97efc6b74ea0 (00:01:43)
Task 31 | 17:18:56 | Compiling packages: ncp_rootfs/5973b83f6fdd6a4c5286d675fd0729e98acd61b0 (00:00:53)
Task 31 | 17:20:14 | Compiling packages: nsx-python27/75df9f63298d0d2644c6030b160b9b7486a9c195 (00:03:44)
Task 31 | 17:20:15 | Compiling packages: openvswitch/2b5d30bcf7b6e19d82dfd7f851701f04e4654dad (00:02:56)
Task 31 | 17:23:44 | Creating missing vms: master/5cb6458c-7f6e-4b1c-acfd-f71ad8b0fab7 (0)
Task 31 | 17:23:44 | Creating missing vms: worker/14ad78ba-d6f5-443c-ae17-1e6b3af4dd1e (0)
Task 31 | 17:23:44 | Creating missing vms: worker/7e544293-cfb6-4291-b348-c6b3aaf6fd5c (1)
Task 31 | 17:23:44 | Creating missing vms: worker/26f89b75-40ec-4972-912f-7f5af600d981 (2) (00:01:23)
Task 31 | 17:25:07 | Creating missing vms: worker/7e544293-cfb6-4291-b348-c6b3aaf6fd5c (1) (00:01:23)
Task 31 | 17:25:15 | Creating missing vms: worker/14ad78ba-d6f5-443c-ae17-1e6b3af4dd1e (0) (00:01:31)
Task 31 | 17:25:16 | Creating missing vms: master/5cb6458c-7f6e-4b1c-acfd-f71ad8b0fab7 (0) (00:01:32)
Task 31 | 17:25:16 | Updating instance master: master/5cb6458c-7f6e-4b1c-acfd-f71ad8b0fab7 (0) (canary) (00:02:31)
Task 31 | 17:27:47 | Updating instance worker: worker/14ad78ba-d6f5-443c-ae17-1e6b3af4dd1e (0) (canary) (00:03:45)
Task 31 | 17:31:32 | Updating instance worker: worker/7e544293-cfb6-4291-b348-c6b3aaf6fd5c (1) (00:03:47)
Task 31 | 17:35:19 | Updating instance worker: worker/26f89b75-40ec-4972-912f-7f5af600d981 (2) (00:03:57)

Task 31 Started  Wed Feb  5 17:16:11 UTC 2020
Task 31 Finished Wed Feb  5 17:39:16 UTC 2020
Task 31 Duration 00:23:05
Task 31 done

Succeeded
Step 5. Use kubectl to query our K8s cluster
The pks cluster output above reported the Kubernetes Master IP(s) as 147.70.30.2. This is the address (from the floating/load balancer pool) that our external hostname pks-clus-01 needs to resolve to, so we can now create the DNS or /etc/hosts entry mentioned back in step 4.
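If you do not manage the DNS yourself, a quick way to add a local hosts entry on a Linux desktop is shown below – purely illustrative, using the hostname and IP from this example:
$ echo "147.70.30.2 pks-clus-01" | sudo tee -a /etc/hosts
With the hostname resolvable, pks get-credentials fetches the cluster credentials and sets up a kubectl context: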
cormac@pks-cli:~$ pks get-credentials k8s-clus-01

Fetching credentials for cluster k8s-clus-01.
Context set for cluster k8s-clus-01.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>
cormac@pks-cli:~$ kubectl config get-contexts
CURRENT   NAME          CLUSTER       AUTHINFO                               NAMESPACE
*         k8s-clus-01   k8s-clus-01   9b774997-572f-4d22-b6f5-039d74bb4004
cormac@pks-cli:~$
kubectl commands now run against the new cluster – for example, listing the worker nodes:
cormac@pks-cli:~$ kubectl get nodes
NAME                                   STATUS   ROLES    AGE   VERSION
2236372c-911e-4784-952b-1fe7bf2067f5   Ready    <none>   21h   v1.14.5
5b1f2193-25fe-4960-9f94-1ea632cf370b   Ready    <none>   21h   v1.14.5
e3a3132b-ae3f-436b-bf01-e485e81701bd   Ready    <none>   21h   v1.14.5
cormac@pks-cli:~$
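A couple of other standard kubectl queries are useful at this point, for example to check the API server endpoint and see the pods running in the system namespaces:
$ kubectl cluster-info
$ kubectl get pods --all-namespaces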
Finally, bosh vms shows which Availability Zone each of the PKS and Kubernetes node VMs landed in:
cormac@pks-cli:~$ bosh vms
Using environment '147.70.10.11' as user 'director'

Task 36
Task 35
Task 36 done
Task 35 done

Deployment 'pivotal-container-service-9be5f01641747546dce7'

Instance                                                        Process State  AZ           IPs           VM CID                                   VM Type     Active
pivotal-container-service/cec0ab96-6f39-46f1-a277-8a49181dc244  running        PKS-MGMT-AZ  147.70.10.12  vm-7776bf04-4eb1-4a7c-881a-7f7e5ef11247  large.disk  true

1 vms

Deployment 'service-instance_ce2567f3-233e-4ec6-b1e4-fafbbecbadeb'

Instance                                     Process State  AZ               IPs       VM CID                                   VM Type      Active
master/5cb6458c-7f6e-4b1c-acfd-f71ad8b0fab7  running        PKS-MGMT-AZ      20.0.0.2  vm-4fb9f77b-8099-4eaf-a191-82db05726640  medium.disk  true
worker/14ad78ba-d6f5-443c-ae17-1e6b3af4dd1e  running        PKS-Compute1-AZ  20.0.0.3  vm-990bc1a5-057e-44f0-a71f-e36714a5a3a5  medium.disk  true
worker/26f89b75-40ec-4972-912f-7f5af600d981  running        PKS-Compute3-AZ  20.0.0.5  vm-3bdc0233-a0b7-4033-9798-a8ae33fd6bad  medium.disk  true
worker/7e544293-cfb6-4291-b348-c6b3aaf6fd5c  running        PKS-Compute2-AZ  20.0.0.4  vm-5cb95724-9929-428d-8b92-182f16a77d17  medium.disk  true

4 vms

Succeeded
Great! We have successfully deployed our first Kubernetes cluster using Enterprise PKS in a Workload Domain in VMware Cloud Foundation. If you want to try building some containerized applications, and become familiar with Kubernetes in general, feel free to look at some of the Kubernetes 101 introductions that I put together previously.
Step 6. Availability Zones
During the PKS deployment in part 9, one of the features we configured was Availability Zones (AZs). We created a Management AZ for the PKS components and 3 x Compute AZs for the Kubernetes masters and workers. By default, the small plan, which was used in step 4 above to create our first K8s cluster, has a single master and 3 worker nodes. This plan places the master in the Management AZ, and each of the workers in its own Compute AZ. Here is a snippet of plan 1 taken from the PKS tile:
And if we look at how that K8s cluster got deployed in vSphere, we see the following in the inventory, with a single worker in each Compute AZ and the master placed in the Management AZ. The sc-xxx VMs are stemcells, best described as templates used by PKS to deploy the K8s nodes.
You may enable/modify any of the other plans in Enterprise PKS to meet your Kubernetes needs. For example, if you configured a plan that had 3 masters as well as 3 worker nodes, you could also place each master in its own Compute AZ, as shown here:
Here is how that looks in the vSphere inventory after a K8s cluster with that plan has been deployed. This shows the nodes from both the original small plan deployment of K8s as well as our new plan with 3 x masters. We can now see a new master and a new worker in each of the Compute AZs by comparing it to the previous vSphere inventory screenshot.
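For reference, creating a cluster from a different plan is simply a matter of passing that plan's name to pks create-cluster. An illustrative example, assuming a plan named medium with 3 masters has been enabled in the PKS tile:
$ pks create-cluster k8s-clus-02 --external-hostname pks-clus-02 --plan medium --num-nodes 3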
Hopefully that shows you the power and simplicity of having Enterprise PKS (and K8s) integrated with Availability Zones on vSphere.
Caveat
I came across one issue with Kubernetes running on Enterprise PKS automatically deployed on top of VCF. When I tried creating my first stateful application on this cluster, utilizing the VMware VCP (vSphere Cloud Provider), I hit an error when trying to provision Persistent Volumes (PVs): the provisioner complained that it could not find a VM folder called pks_worker_vms.
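For context, dynamic provisioning through the VCP is typically triggered by a StorageClass and PVC pair along these lines – a minimal sketch, where the StorageClass and PVC names are illustrative rather than taken from my environment:
$ cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vcp-thin
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-pvc
spec:
  storageClassName: vcp-thin
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF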
The first question I had was where the folder pks_worker_vms was defined. I went back and checked the BOSH and PKS tiles in Pivotal Ops Manager, and found it in the PKS tile.
It is the Stored VM Folder setting in the PKS tile, under the Kubernetes Cloud Provider section. This should be set to the same value as the VM Folder in the BOSH tile, which is pcf_vms, but in my deployment it was set to pks_worker_vms.
However, it is very easy to resolve. One option is to simply create the expected folder and sub-folder pks_worker_vms/xxxx in your vSphere inventory, and everything will work as expected. The alternative is to change the Stored VM Folder from pks_worker_vms to pcf_vms, so that the PKS > Stored VM Folder matches the BOSH > VM Folder, and then apply the changes in Ops Manager. Speaking to some PKS experts, the latter is the preferred approach.
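If you go with the first option, you can create the folders in the vSphere Client, or from the CLI with a tool such as govc. A rough sketch, where the vCenter details and datacenter name are placeholders and xxxx stands for the sub-folder name reported in your own environment:
$ export GOVC_URL=vcenter.example.com GOVC_USERNAME=administrator@vsphere.local GOVC_PASSWORD='*****'
$ govc folder.create /MyDatacenter/vm/pks_worker_vms
$ govc folder.create /MyDatacenter/vm/pks_worker_vms/xxxx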
This issue has been reported internally. We will get it addressed asap and create a Knowledgebase article on how to resolve it if you do come across it.
Check out the full range of VMware Cloud Foundation blog posts here.