PKS deployment revisited – some changes in v1.2.2

It is almost 6 months since I last rolled out a deployment of Pivotal Container Service (PKS). I just did a new deployment this week using some of the later builds of Pivotal Operations Manager (v2.3) and PKS (v1.2.2), and noticed a number of changes. This post takes you through those changes and highlights where things are different and might catch you out. I am not going to go through all of the requirements from scratch – there are a number of posts already available which explain the command line tools that you need, and so on. So this post starts with the assumption that the reader has some familiarity with rolling out earlier versions. If that is not the case, then this earlier blog post can be referenced. This is really just to run through the deployment step by step, and I will call out some of the differences. Heads up – this is a rather long post with a lot of screenshots and lots of command line stuff towards the end. However this is pretty normal for PKS.

The first step is to deploy the Pivotal Operations Manager OVA, power it on, point a browser to the appropriate URL, set up credentials and start configuring. As mentioned, I deployed the latest Cloud Foundry Ops Manager from Pivotal, which at the time was v2.3 (19th Nov 2018). You can get it here if you have a pivotal.io account. The BOSH Director for vSphere tile will be available by default in the Installation Dashboard, but will need configuring (thus the orange shading). Simply click on it to begin. The first step is to add the vCenter Configuration details, alongside some other infrastructure details. Nothing has changed here from the previous version as far as I can tell.

The next configuration step is the Director – the only items we need here are the NTP server(s), enable the VM Resurrector Plugin and enable the Post Deploy Scripts. If the Post Deploy Scripts are enabled, it means that we will roll-out some Kubernetes (K8s) apps such as the K8s dashboard when a cluster is created. We will see how to access it in the very last step of the post.

The next step is to create an Availability Zone or AZ. When PKS is working with vSphere, a vSphere cluster object equates to an AZ. No changes from before here either.

The next configuration step is related to networking. I’m keeping this very simple for my first deployment: I will map this PKS network directly to a VM portgroup called vlan-50 on my vSphere infrastructure. I am also limiting PKS to use only a subset of the available IP addresses in that range. These IP addresses will be used for the BOSH VM, the PKS VM, and then any Kubernetes masters and workers that get deployed later on.

The final step in the BOSH setup is to choose the AZ and network for the BOSH Director VM. I’ll pick the ones I set up previously. Remember the AZ simply defines which vSphere cluster I am deploying to.

Cool – we’re done. The BOSH tile has turned green in the Installation Dashboard. Now we need to apply the changes (not shown here). Once the changes have taken effect, we can go ahead and add the Pivotal Container Service. This can also be downloaded from Pivotal.io here. I’m using version 1.2.2-build.3 (Nov 19th 2018).

As you can see on the left hand side of the screenshot above, I have already imported PKS. If you haven’t imported it yet, you will need to do that step. After it has been imported, click on the + sign next to the service to add it to the Installation Dashboard. It initially shows up with an orange colour meaning that additional configuration is needed. Now there is a major difference compared to my previous experience – there is no requirement for me to add a new stemcell (essentially a VM image which will be used by Kubernetes master and worker VMs when a cluster is deployed). It seems that PKS has this in place already in this release, so that saves us an additional step, and we can get straight into the configuration part.

The first PKS configuration step is to assign an AZ and network. I am just keeping everything in the same cluster and the same network, so I will just reuse the ones I created previously. The Network is where PKS will be deployed; the Service Network is where my K8s master and workers will be deployed. In this example, they will all be on the same flat network.

This next step for the PKS API is an important one as it is a bit different from previously. In the past, you just generated the certificate. You still have to do that, but now there is a new API Hostname (FQDN) entry. To the best of my knowledge, this has simply been moved up from the UAA section, since it was in the UAA section that we previously had to put a UAA URL (UAA is User Account and Authentication). As the description below for the FQDN states, this is the hostname used to access PKS. So as long as that DNS name resolves to the IP address of the PKS VM, you should be good. I simply used uaa.rainpole.com, and later on I will update the appropriate UAA DNS entry to resolve to the same IP as the PKS VM.

Next we get to the plans, which are basically the sizing of your K8s clusters. I only set up plan 1, the small plan – this Kubernetes plan uses 1 master and 3 worker nodes when stood up. Be sure to select the AZ for both the master and the workers. I don’t remember this from before, so I guess you can now place masters on one vSphere cluster/AZ and workers on another.

Leave the other plans inactive.

Now we get to the VCP, the vSphere Cloud Provider. This is the component that enables us to create persistent volumes on vSphere storage for container based applications, all done via Kubernetes YAML files. There is a lot more detail on VCP here which describes how to create a persistent volume once you have your Kubernetes cluster deployed. Nothing is different here (well, apart from the AWS option, which is new, but not relevant to us).
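
To give a flavour of what that looks like once a cluster is up, here is a minimal sketch of a StorageClass backed by the in-tree vSphere provisioner, and a PVC that consumes it. The names (vcp-sc, demo-pvc) and the thin diskformat are purely illustrative – adjust to suit your own environment:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vcp-sc            # hypothetical StorageClass name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin        # thin-provisioned VMDK
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-pvc          # hypothetical claim name
spec:
  storageClassName: vcp-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Applying something like that with kubectl create -f should result in a VMDK being dynamically provisioned on vSphere storage and bound to the claim.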

One thing to reiterate is that I am keeping this deployment very simple for the moment. I am not modifying the Networking section from the default flannel option, i.e. I am not going to use NSX-T. I am also skipping the Monitoring section which allows Wavefront integration. I may revisit these later. Therefore the only thing left to do is to decide whether or not I want to join the CEIP, the Customer Experience Improvement Program (found under Usage Data). This is a new option in PKS which did not exist previously. That is the final step, so now you can save the PKS config and return to the Installation Dashboard view.

Now that the PKS tile has been configured, we can review the pending changes and apply them, just like we did for the BOSH tile previously.

Once you click on the review button, you will see the changes that need to be applied. Click on Apply to make the changes.

This should initiate the configuration task, and hopefully it will succeed as shown below.

In my case, after the configuration succeeded, there was still an “Errand” that needed to be run to do an upgrade on all clusters. Not sure why, but this ran through very quickly when I hit Apply.

You can see the Errand success here. Like I said, it was very quick. I don’t remember having to do this in the past either.

Now let’s quickly jump back to vSphere to see what we have. I can see the Pivotal Operations Manager VM which I deployed from an OVA. I see 2 new VMs in the pcf_vms folder, which are my BOSH Director and my PKS VM respectively. I also see two stemcells in place under the pcf_templates folder. Last, there is the PKS CLI VM which I created myself and on which I installed the necessary CLI components, such as pks and kubectl. These can be downloaded from the Pivotal site. I always refer to William Lam’s great blog on how to get this CLI VM set up.

Now before we leave the UI, we need to capture one last piece of information: the “admin” secret, so that we can authenticate against the PKS API. To get this, navigate to the PKS tile, click it, then select the Credentials tab. In the credentials list, locate the Pks Uaa Management Admin Client and click on its Link to Credential, as shown below.

This will take you to the credential itself. Note the secret down as we will need that shortly. Make sure you pick the right one!

OK – that is us done with the UI. Now we need to switch to the CLI, and ssh onto the PKS CLI VM that I mentioned previously. This VM has my uaac, pks, om, bosh and kubectl CLI tools that we will need to do the final steps and deploy a K8s cluster. Remember this VM is something you need to build yourself, and you will have to download these tools as well.
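
Before going any further, it is worth a quick sanity check that those tools are actually present on the CLI VM and on your PATH. Something along these lines will do (output omitted here, and versions will obviously vary):

# confirm the CLI tools are present on the PATH
which uaac pks om bosh kubectl
# and that they respond
pks --version
bosh --version
kubectl version --client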

Once you have logged into your PKS CLI VM, verify that you can resolve the name of the PKS API URL that you set up way back at the beginning. Remember that I called mine uaa.rainpole.com, and it needs to resolve to the same IP address as the PKS VM.

cormac@pks-cli:~$ nslookup
> uaa
Server:         127.0.0.53
Address:        127.0.0.53#53
 
Non-authoritative answer:
Name:   uaa.rainpole.com
Address: 192.50.0.141
> exit

 

Next, let’s authenticate as the admin client, using the secret we noted previously. I am doing this step on my PKS CLI VM. However this step could also be done on the Ops Manager VM, which has the uaac binary as well.

cormac@pks-cli:~$ uaac version
UAA client 4.1.0
cormac@pks-cli:~$
cormac@pks-cli:~$ uaac target https://uaa.rainpole.com:8443 --skip-ssl-validation
Target: https://uaa.rainpole.com:8443
Context: admin, from client admin
cormac@pks-cli:~$ uaac token client get admin -s Tza_OvUlCfjb5u9x2smxs2RxpJ8Lap1c
Successfully fetched token via client credentials grant.
Target: https://uaa.rainpole.com:8443
Context: admin, from client admin

 

This will create a file called .uaac.yml in your home folder:

cormac@pks-cli:~$ cat .uaac.yml
https://uaa.rainpole.com:8443:
  skip_ssl_validation: true
  ca_cert:
  prompts:
    username:
    - text
    - Email
    password:
    - password
    - Password
  contexts:
    admin:
      current: true
      client_id: admin
      access_token: xxxxxxxxxx
      token_type: bearer
      expires_in: 43199
      scope:
      - clients.read
      - clients.secret
      - pks.clusters.manage
      - clients.write
      - uaa.admin
      - clients.admin
      - scim.write
      - pks.clusters.admin
      - scim.read
      jti: 95095ad975054f288cf44482fd1450a7
  current: true
cormac@pks-cli:~$

 

Let’s go ahead and add the admin user with some credentials, and then add that admin user as a member of pks.clusters.admin, i.e. someone who can administer PKS clusters.

cormac@pks-cli:~$ uaac user add admin --emails admin@rainpole.com -p 'VxRail!23'
user account successfully added
cormac@pks-cli:~$
cormac@pks-cli:~$ uaac member add pks.clusters.admin admin
success

 

OK – we have now done the user account authentication. Now we should be able to do a PKS login with that user account. Note that this command will also build a .pks/creds.yml file in our home folder.

cormac@pks-cli:~$ pks login -a uaa.rainpole.com -u admin -p 'xxxxxxxx' -k
API Endpoint: uaa.rainpole.com
User: admin
cormac@pks-cli:~$ ls -al .pks
total 12
drwx------  2 cormac cormac 4096 Nov 20 14:52 .
drwxr-xr-x 20 cormac cormac 4096 Nov 20 14:52 ..
-rw-------  1 cormac cormac 2118 Nov 20 14:52 creds.yml
cormac@pks-cli:~$ cat .pks/creds.yml
api: https://uaa.rainpole.com:9021
ca_cert: ""
username: admin
skip_ssl_verification: true
access_token: xxxxxxxx
refresh_token: xxxxxxxxx

 

Now there are two optional om commands that can be run which will allow you to run bosh commands for tracking tasks, and for examining other parts of the deployment such as VMs. I have found that doing these steps can be very useful for troubleshooting and monitoring K8s cluster deployments. The first om command retrieves the Ops Manager root CA certificate and saves it to a file; the second om command extracts the BOSH client secret. Once we have those, we can run bosh commands. Unfortunately the om commands are a bit complex.

cormac@pks-cli:~$ om --target https://pivotal-ops-mgr.rainpole.com -u admin -p 'VxRail!23' -k curl -p /api/v0/certificate_authorities | jq -r '.certificate_authorities | select(map(.active == true))[0] | .cert_pem' > opsmanager.pem
Status: 200 OK
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Tue, 20 Nov 2018 14:54:13 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: nginx/1.10.3 (Ubuntu)
Strict-Transport-Security: max-age=15552000; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: bccebd82-672b-4c7d-a121-162b0824f897
X-Runtime: 0.032703
X-Xss-Protection: 1; mode=block
cormac@pks-cli:~$ cat opsmanager.pem
-----BEGIN CERTIFICATE-----
MIIDUTCCAjmgAwIBAgIVAOlIBl947x4lweC5jqiiCbKdVegMMA0GCSqGSIb3DQEB
CwUAMB8xCzAJBgNVBAYTAlVTMRAwDgYDVQQKDAdQaXZvdGFsMB4XDTE4MTExODEx
MDkwOFoXDTIyMTExOTExMDkwOFowHzELMAkGA1UEBhMCVVMxEDAOBgNVBAoMB1Bp
dm90YWwwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC99mpiZ34YuR3H
WpJl1Gk4VixGi/zzvWhtuEK3XPT2ZCrp8W7pb7p/5z82AdfP95tfmH+qjqHaMfm/
6VieUy9e4VUYOsvVbg+vhO2M1wHQpQxcLOidP8OXxDi7nxTX5zIhZXN2EBJeHoGj
tfdxwOj1RWtoAbVb85G6DmWoQIKxlHSKcsjOKOB90ZdWVKNcdNIkNDVGylD/IUoF
mpChwa2lIBvLhWUc4p5r8m1zBtM5lcEF+D6aVHkJ1zR6QXiJVnDKjgylpchVdeeN
JHmHMoyis3UZX+SA4AfcQssSNolCLw5Le5bda9c5l0e9+oHveAYEqLIV1OGSvPw4
YgXHQYWdAgMBAAGjgYMwgYAwHQYDVR0OBBYEFBv/mIXqQ0A2R1YhVRBHelQl37HC
MB8GA1UdIwQYMBaAFBv/mIXqQ0A2R1YhVRBHelQl37HCMB0GA1UdJQQWMBQGCCsG
AQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB
BjANBgkqhkiG9w0BAQsFAAOCAQEAeL5WE5hTb+/C8gyt2uSFfcI98EkSsbr6KZEL
MLCeEpgGEcLr+laPviPyHx6L+XnxJowtVKBTsQa/gim78K2PqPmyKjyTkqN+MYEY
p6qTTjTZjLzCJmGpo8yFzfVBPwZ8X3EK4XYdPUyOMQFJJyBA5cAI03kijhqvGUnY
Ixd9BTYGnN5JlpBn1F0eF9R6aJBa01y4CBxAlOF7VgTWYCZuS5ZFairTy02L7tcU
wAf84SQH8979c4IHBRx+B9qaRXZUTYNO1FpnMdK7qzzR0kafx5D70z0Yx2+CV/tu
VOxHhSezT2bA8UR2/BA8sY+56RgtQApJeJZcoxzsjNDgZzytWQ==
-----END CERTIFICATE-----
cormac@pks-cli:~$ om --target https://pivotal-ops-mgr.rainpole.com -u admin -p 'VxRail!23' -k curl -p /api/v0/deployed/director/credentials/bosh2_commandline_credentials -s | jq -r '.credential'
BOSH_CLIENT=ops_manager BOSH_CLIENT_SECRET=LYbG6SUZMfZot97TWIJQ_qZyXOroWyHF BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate BOSH_ENVIRONMENT=192.50.0.140 bosh

 

The output from the above command can then be placed in your .bash_profile and will allow you to run bosh commands, as we shall see shortly. Obviously you will see different values returned.

cormac@pks-cli:~$ cat .bash_profile
PATH=$PATH:/usr/local/bin
export PATH
export BOSH_CLIENT=ops_manager
export BOSH_CA_CERT=/home/cormac/opsmanager.pem
export BOSH_CLIENT_SECRET=LYbG6SUZMfZot97TWIJQ_qZyXOroWyHF
export BOSH_ENVIRONMENT=192.50.0.140
cormac@pks-cli:~$ source .bash_profile
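
A quick way to confirm that those BOSH variables have been picked up correctly is to ask the Director to describe itself. If the environment, client and certificate are all good, commands like these will return the Director details and the list of deployments rather than an error (output not shown):

# show the Director that the BOSH_* variables point at
bosh env
# list the deployments the Director knows about (PKS itself, plus any K8s clusters)
bosh deployments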

 

Next we can create the Kubernetes cluster. Remember that I will be using a small plan, which is a single master and 3 workers, so 4 VMs in total. Now, you will need to ensure that the external hostname (in this case pks-cluster-01) resolves to the K8s master IP address, so once the master is deployed, you can add this entry to the /etc/hosts file of this CLI VM, or into your DNS.
cormac@pks-cli:~$ pks create-cluster k8s-cluster-01 --external-hostname pks-cluster-01 --plan small --num-nodes 3
Name:                     k8s-cluster-01
Plan Name:                small
UUID:                     32708ce3-779c-4baf-8850-6efc48926d19
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Creating cluster
Kubernetes Master Host:   pks-cluster-01
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  In Progress
Network Profile Name:
Use 'pks cluster k8s-cluster-01' to monitor the state of your cluster
cormac@pks-cli:~$

 

This will take a while, so how can I see what is happening? This is where the bosh command comes in, as it can monitor the deployment in real time.

cormac@pks-cli:~$ bosh task
Using environment '192.50.0.140' as client 'ops_manager'
Task 212
Task 212 | 14:57:42 | Preparing deployment: Preparing deployment
Task 212 | 14:57:43 | Warning: DNS address not available for the link provider instance: pivotal-container-service/4c16c631-ce97-447d-903a-eaedb103113f
Task 212 | 14:57:43 | Warning: DNS address not available for the link provider instance: pivotal-container-service/4c16c631-ce97-447d-903a-eaedb103113f
Task 212 | 14:57:50 | Preparing deployment: Preparing deployment (00:00:08)
Task 212 | 14:58:15 | Preparing package compilation: Finding packages to compile (00:00:00)
Task 212 | 14:58:15 | Creating missing vms: master/1182268d-4d59-44d2-a9d2-36e9e4266dd3 (0)
Task 212 | 14:58:15 | Creating missing vms: worker/f1315396-3520-4011-83a7-7e3f9385c136 (0)
Task 212 | 14:58:15 | Creating missing vms: worker/77327e19-7713-4f4b-a79b-3f59b9e731da (2)
Task 212 | 14:58:15 | Creating missing vms: worker/3e81d2b6-6eda-4355-906f-97110d4a4690 (1)
Task 212 | 14:59:45 | Creating missing vms: worker/77327e19-7713-4f4b-a79b-3f59b9e731da (2) (00:01:30)
Task 212 | 14:59:47 | Creating missing vms: worker/f1315396-3520-4011-83a7-7e3f9385c136 (0) (00:01:32)
Task 212 | 14:59:48 | Creating missing vms: worker/3e81d2b6-6eda-4355-906f-97110d4a4690 (1) (00:01:33)
Task 212 | 14:59:54 | Creating missing vms: master/1182268d-4d59-44d2-a9d2-36e9e4266dd3 (0) (00:01:39)
Task 212 | 14:59:54 | Updating instance master: master/1182268d-4d59-44d2-a9d2-36e9e4266dd3 (0) (canary) (00:02:27)
Task 212 | 15:02:21 | Updating instance worker: worker/f1315396-3520-4011-83a7-7e3f9385c136 (0) (canary) (00:02:05)
Task 212 | 15:04:26 | Updating instance worker: worker/3e81d2b6-6eda-4355-906f-97110d4a4690 (1) (00:02:05)
Task 212 | 15:06:31 | Updating instance worker: worker/77327e19-7713-4f4b-a79b-3f59b9e731da (2) (00:02:07)
Task 212 Started  Tue Nov 20 14:57:42 UTC 2018
Task 212 Finished Tue Nov 20 15:08:38 UTC 2018
Task 212 Duration 00:10:56
Task 212 done
Succeeded
cormac@pks-cli:~$

 

And if you want to see the VMs that are running, use the following bosh command.

cormac@pks-cli:~$ bosh vms
Using environment '192.50.0.140' as client 'ops_manager'
Task 224
Task 225
Task 224 done
Task 225 done
Deployment 'pivotal-container-service-07f11cca5e4562ece791'
Instance                                                        Process State  AZ     IPs           VM CID                                   VM Type  Active
pivotal-container-service/4c16c631-ce97-447d-903a-eaedb103113f  running        CH-AZ  192.50.0.141  vm-46b6509c-af82-4ea4-8a46-ec713f7c01e9  large    true
1 vms
Deployment 'service-instance_32708ce3-779c-4baf-8850-6efc48926d19'
Instance                                           Process State  AZ     IPs           VM CID                                   VM Type      Active
apply-addons/85f98eba-3174-43df-a25a-150c9bf1e0d5  running        CH-AZ  192.50.0.146  vm-c30e83db-0239-462b-9d05-475e0676cadb  micro        true
master/1182268d-4d59-44d2-a9d2-36e9e4266dd3        running        CH-AZ  192.50.0.142  vm-43bb6a8c-0bf1-4317-8fed-37485fd46beb  medium.disk  true
worker/3e81d2b6-6eda-4355-906f-97110d4a4690        running        CH-AZ  192.50.0.144  vm-389ef9de-7d2b-43e0-8a5a-1f2f22922953  medium.disk  true
worker/77327e19-7713-4f4b-a79b-3f59b9e731da        running        CH-AZ  192.50.0.145  vm-812f1627-dfc6-42c9-9e6d-557e78ea8836  medium.disk  true
worker/f1315396-3520-4011-83a7-7e3f9385c136        running        CH-AZ  192.50.0.143  vm-9571cb84-6c63-46ac-9bd8-e84fc8ac8c6a  medium.disk  true
5 vms
Succeeded
cormac@pks-cli:~$

 

Now, there is 1 VM running PKS, but there are currently 5 VMs in the service instance deployment above (this is our K8s cluster). There is an apply-addons VM which is instantiated a number of times to get the cluster configured correctly. Once it has finished its work, you will only see the single master and 3 worker VMs when the command is rerun:

Deployment 'service-instance_32708ce3-779c-4baf-8850-6efc48926d19'
Instance                                     Process State  AZ     IPs           VM CID                                   VM Type      Active
master/1182268d-4d59-44d2-a9d2-36e9e4266dd3  running        CH-AZ  192.50.0.142  vm-43bb6a8c-0bf1-4317-8fed-37485fd46beb  medium.disk  true
worker/3e81d2b6-6eda-4355-906f-97110d4a4690  running        CH-AZ  192.50.0.144  vm-389ef9de-7d2b-43e0-8a5a-1f2f22922953  medium.disk  true
worker/77327e19-7713-4f4b-a79b-3f59b9e731da  running        CH-AZ  192.50.0.145  vm-812f1627-dfc6-42c9-9e6d-557e78ea8836  medium.disk  true
worker/f1315396-3520-4011-83a7-7e3f9385c136  running        CH-AZ  192.50.0.143  vm-9571cb84-6c63-46ac-9bd8-e84fc8ac8c6a  medium.disk  true
4 vms

 

You can examine the state of the K8s cluster using the pks command:

cormac@pks-cli:~$ pks clusters
Name           Plan Name UUID                                 Status     Action
k8s-cluster-01 small     32708ce3-779c-4baf-8850-6efc48926d19 succeeded  CREATE
 
cormac@pks-cli:~$ pks cluster k8s-cluster-01

Name: k8s-cluster-01
Plan Name: small
UUID: 32708ce3-779c-4baf-8850-6efc48926d19
Last Action: CREATE
Last Action State: succeeded
Last Action Description: Instance provisioning completed
Kubernetes Master Host: pks-cluster-01
Kubernetes Master Port: 8443
Worker Nodes: 3
Kubernetes Master IP(s): 192.50.0.142
Network Profile Name:
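
With the master IP now reported (192.50.0.142 in my case), the easiest way to make the external hostname resolvable from the CLI VM is an entry along the following lines in /etc/hosts, or the equivalent A record in your DNS. Obviously substitute your own master IP and hostname:

# /etc/hosts entry on the PKS CLI VM – master IP and external hostname from pks cluster output
192.50.0.142    pks-cluster-01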

 

The next step is to get our Kubernetes credentials. With the external hostname (pks-cluster-01) now resolvable from the CLI VM, just run the following command:

 

cormac@pks-cli:~$ pks get-credentials k8s-cluster-01
Fetching credentials for cluster k8s-cluster-01.
Context set for cluster k8s-cluster-01.
You can now switch between clusters by using:
$kubectl config use-context <cluster-name>
cormac@pks-cli:~$

 
This will create a .kube/config file in the home directory.
 

cormac@pks-cli:~$ ls -al .kube/
total 12
drwxrwxr-x  2 cormac cormac 4096 Nov 20 15:17 .
drwxr-xr-x 22 cormac cormac 4096 Nov 20 15:17 ..
-rw-------  1 cormac cormac 2802 Nov 20 15:17 config
cormac@pks-cli:~$ cat .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxxxxx
  name: k8s-cluster-01
contexts:
- context:
    cluster: k8s-cluster-01
    user: ebdfa8fc-a361-40be-8e54-6165015175b3
  name: k8s-cluster-01
current-context: k8s-cluster-01
kind: Config
preferences: {}
users:
- name: ebdfa8fc-a361-40be-8e54-6165015175b3
  user:
    token: xxxxxxxxx

 

Now we can run kubectl commands to query the state of the cluster, deploy apps, and so on.

 

cormac@pks-cli:~$ kubectl get nodes
NAME                                   STATUS    ROLES     AGE       VERSION
00a4b3f9-0c0b-48af-912b-f51eca6b669b   Ready     <none>    14m       v1.11.3
418006e5-a522-4b9f-8b44-153e9b61b6c4   Ready     <none>    10m       v1.11.3
b8a5bb31-904a-4d2b-871c-5678799f5a16   Ready     <none>    12m       v1.11.3
cormac@pks-cli:~$
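
As a quick smoke test that the cluster can actually run a workload, you could stand up a throwaway nginx deployment and expose it via a NodePort, then clean it up again. This is just an illustrative sketch (nginx is an arbitrary image, and it assumes the worker nodes can pull images from the internet):

# create a throwaway deployment called nginx (hypothetical name, arbitrary image)
kubectl run nginx --image=nginx --port=80
# expose it on a NodePort so it is reachable via any worker IP
kubectl expose deployment nginx --type=NodePort --port=80
# check that the pod and service came up
kubectl get pods,svc
# clean up afterwards
kubectl delete svc,deployment nginx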

 

One final item you may be interested in getting access to is the Kubernetes dashboard. I mentioned way back in the PKS configuration that if we enable the option to run post deploy scripts, we will get apps such as the Kubernetes dashboard deployed. It is deployed in the kube-system namespace, so use the following command to make sure it is running, and also to get the port (in my case, it is port 30261).
 

cormac@pks-cli:~$ kubectl get svc --namespace=kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
heapster               ClusterIP   10.100.200.229   <none>        8443/TCP        20h
kube-dns               ClusterIP   10.100.200.10    <none>        53/UDP,53/TCP   20h
kubernetes-dashboard   NodePort    10.100.200.161   <none>        443:30261/TCP   20h
metrics-server         ClusterIP   10.100.200.186   <none>        443/TCP         20h
monitoring-influxdb    ClusterIP   10.100.200.2     <none>        8086/TCP        20h
cormac@pks-cli:~$

 

Next you need to determine which K8s worker/node it is running on so you can access it directly via the worker IP address. To do that, use the following command:
cormac@pks-cli:~$ kubectl describe pods --namespace=kube-system | grep -A 5 kubernetes-dashboard | grep Node:
Node:               418006e5-a522-4b9f-8b44-153e9b61b6c4/192.50.0.145
cormac@pks-cli:~$

 

Now open your browser to the URL https://ip-address-of-worker-node:port-number, or in my case, https://192.50.0.145:30261. You will first be prompted for a config file or token. Simply copy the .kube/config file from your home folder on the PKS CLI VM and upload that. Now you should have access to the dashboard.
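
If you would rather use the token option instead of uploading the config file, the bearer token that pks get-credentials placed in the kubeconfig can be pulled out with something along these lines (a sketch which assumes the single-user config shown earlier):

# print the bearer token from the first user entry in the kubeconfig
kubectl config view --raw -o jsonpath='{.users[0].user.token}'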

 

That completes the post. As you can see, there are a few new items and changes to the setup. Stay tuned while I look at getting some additional items working over the coming weeks.