
Fun with Kubernetes on Photon Platform v1.2

In this post, I’m simply going to show you a few useful tips and tricks to see the power of Kubernetes on Photon Platform v1.2. If you are well versed in Kubernetes, there won’t be anything ground-breaking for you in this post. However, if you are new to K8s as I am (K8s is shorthand for Kubernetes), and are looking to roll out some containerized apps once you have Kubernetes running on Photon Platform, some of these might be of interest. If you are new to K8s, you might also like to review some of the terminology in this older blog post.

1. Adding additional K8S workers/nodes

If you’ve been following my previous posts, you’ll know that I originally deployed my K8s cluster/service with a single worker or node. A K8s worker or node on Photon Platform is essentially a VM that can run containers. To scale out the number of workers associated with a K8s cluster, open a browser to the Photon Platform UI, and select the tenant, project and cluster (service) that you wish to scale out. Here, you will find a resize button, which lets you bump the number of workers up to a higher value. In this example, I am bumping it up to 3. This has the effect of deploying additional worker virtual machines on Photon Platform.

2. Deploy a containerized application

In this example, I am going to use some pre-existing YAML files to deploy some containerized applications on K8s, namely the nginx and tomcat web servers. Both YAML files have a similar look and feel, as you will see. First is the tomcat YAML file. It contains both a “Service” section and a “ReplicationController” section. The Service has the port mapping, and it maps the tomcat container port 8080 to node port 30001. This means that the tomcat service, whichever worker it lands on, will be accessible from the K8s master via port 30001. This application will only have 1 pod/replica initially, since replicas is set to 1. The image is tomcat, which will be fetched from an external repository once deployment begins.

apiVersion: v1
kind: Service
metadata:
  name: tomcat-demo-service
spec:
  selector:
    name: tomcat
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
    protocol: "TCP"
    nodePort: 30001
    name: tomcat-server
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: tomcat-server
spec:
  replicas: 1
  selector:
    name: tomcat-server
  template:
    metadata:
      labels:
        name: tomcat-server
    spec:
      containers:
      - name: tomcat-frontend
        image: tomcat
        ports:
        - containerPort: 8080

Let’s now look at the nginx YAML file. The layout is very similar, with some minor differences. This app will have 3 pods since replicas is set to 3, the image is nginx, and we have not set a node port, so we will be allocated a mapped port at deployment time.

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-service
  labels:
    app: nginx-demo
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: nginx-demo
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx-demo
        image: nginx
        ports:
        - containerPort: 80

These YAML files can be uploaded directly into the Kubernetes UI using the 4 steps highlighted below. The Kubernetes management interface can be launched directly from the Photon Platform UI, and is available in the same screen where we resized the cluster in part 1 above. Simply click on the “Open Management UI” button to launch it.

This will automatically create the application defined in the YAML file. The deployments can be queried to see which port they are accessible on from the master node. For example, if I now point my browser at my master node and the node port of 30001 defined in the YAML file, I should see the default tomcat landing page:

Remember the master IP address is not the same IP as the management UI, which uses the load-balancer IP address. This caught me out.

You can use the same process for testing the nginx deployment, but you would have to examine the deployment to see what port the nginx port 80 has been mapped to on the master.

3. Add additional pods for an application

A pod is the term used for a group of one or more containers, the shared storage for those containers, and options about how to run the containers. You could also think of a pod as the unit in which an application runs on the cluster.
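To make that concrete, here is a minimal stand-alone pod definition (the name is hypothetical and just for illustration; only a container name and image are strictly required):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tomcat-standalone   # hypothetical name, for illustration only
spec:
  containers:
  - name: tomcat
    image: tomcat           # single container in this pod
```

In practice you rarely create bare pods like this; a replication controller (below) creates and manages them for you.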

The purpose of a replication controller is to ensure that a specified number of pod “replicas” are running at any one time, so that even in the event of a failure, the pods (or the applications running in the containers) continue to run.

To increase the number of pods, simply navigate to the replication controller section in the management UI, click on the dots to the right-hand side of the replication controller for your application, and select scale. You can then input the number of pods required, and additional pods will be created for your application. Earlier, I deployed tomcat with only a single replica. I can now increase this to 3 using the procedure outlined here.

The application will automatically scale, and now the tomcat-server should be shown with 3/3 pods, the same as nginx.
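The scale operation in the UI is really just rewriting the replicas field in the controller spec. Conceptually, the tomcat replication controller now carries (fragment only, other fields unchanged):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: tomcat-server
spec:
  replicas: 3   # previously 1; the scale operation rewrites this value
```

The controller then notices it has fewer pods than requested and creates the missing ones.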

One thing to note, and it is a question that comes up a lot: there is currently no way to specify that workers should have affinity to a particular ESXi host. Therefore, even though we can specify a number of replicas/pods for a service, multiple pods may end up on the same ESXi host. Going forward, my understanding is that there are definitely plans for some sort of anti-affinity that prevents pods from the same application being placed on workers on the same host, so that a single failure cannot impact multiple pods.

4. Using kubectl to manage your K8S deployment

Many folks well versed in Kubernetes will be familiar with the CLI tool, kubectl. You can also use this tool to manage your K8s service on Photon Platform. You can download kubectl from the same page where we resized the cluster and opened the management UI in part 1. In this example, I have downloaded it to my Windows desktop. The first thing I must do is get authenticated. VMware provides a very useful photon CLI command to generate the kubectl commands that must be run to authenticate against K8s. Here are the commands, which include logging into Photon Platform, setting the tenant and project, locating the K8s service, and then generating the authentication commands using photon service get-kubectl-auth once you have the service id of the K8s service running on Photon Platform.

E:\PP1.2\.kube>photon -v
photon version 1.2.1 (Git commit hash: dc75225)

E:\PP1.2\.kube>photon target set -c
API target set to ''

E:\PP1.2\.kube>photon target login
User name (username@tenant): administrator@rainpole.local
Login successful

E:\PP1.2\.kube>photon tenant set test-tenant-b
Tenant set to 'test-tenant-b'

E:\PP1.2\.kube>photon project set test-project-b
Project set to 'test-project-b'

E:\PP1.2\.kube>photon service list
ID                                    Name         Type        State  Worker Count
fe1c985b-2705-4d47-bd7e-17937aa26b32  test-kube-b  KUBERNETES  READY  1
Total: 1

E:\PP1.2\.kube>photon service get-kubectl-auth -u administrator@rainpole.local -p xxx fe1c985b-2705-4d47-bd7e-17937aa26b32

kubectl config set-credentials administrator@rainpole.local \
    --auth-provider=oidc \
    --auth-provider-arg=idp-issuer-url= \
    --auth-provider-arg=client-id=d816f411-6da2-475d-af2c-3b85dfc37103 \
    --auth-provider-arg=client-secret=d816f411-6da2-475d-af2c-3b85dfc37103 \
    --auth-provider-arg=refresh-token=eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1... \
    --auth-provider-arg=id-token=eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbmlzd... \

kubectl config set-cluster test-kube-b --server= --insecure-skip-tls-verify=true

kubectl config set-context test-kube-b-context --cluster test-kube-b --user=administrator@rainpole.local

kubectl config use-context test-kube-b-context


The output is rather long and obscure (I shortened the token outputs for the post), but the point is that you will have to run the four kubectl config commands output by the previous photon service command. These update the .kube/config file with the appropriate credentials, cluster information and context information to allow the user to run further kubectl commands. One thing to note when running this in a Windows command window: the trailing ‘\’ line-continuation characters do not work, so you will have to edit the first command, remove the trailing ‘\’ characters and place the command all on one line. Another thing to note is that if the username and password options are not provided as I have done above, not all four commands are displayed. You will need all four kubectl config commands to enable kubectl to run from your environment.
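For reference, the net effect of those four commands is a .kube/config file along these lines. This is a sketch of the standard kubectl config layout, with tokens abbreviated and angle-bracket placeholders for the endpoints redacted above, not literal output from my setup:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: test-kube-b
  cluster:
    server: https://<master-ip>:<port>     # placeholder; set by kubectl config set-cluster
    insecure-skip-tls-verify: true
users:
- name: administrator@rainpole.local
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: <lightwave-endpoint>   # placeholder
        client-id: d816f411-6da2-475d-af2c-3b85dfc37103
        client-secret: d816f411-6da2-475d-af2c-3b85dfc37103
        refresh-token: eyJhbGciOiJSUzI1NiJ9...  # abbreviated
        id-token: eyJhbGciOiJSUzI1NiJ9...       # abbreviated
contexts:
- name: test-kube-b-context
  context:
    cluster: test-kube-b
    user: administrator@rainpole.local
current-context: test-kube-b-context
```

If kubectl ever misbehaves, inspecting this file is usually the quickest way to see which cluster, user and context it is actually using.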

When these commands have been successfully run, you can now start to use kubectl commands to examine your K8S cluster:

E:\PP1.2>kubectl get nodes
NAME           STATUS         AGE       VERSION
               Ready,master   21h       v1.6.0
               Ready          21h       v1.6.0
               Ready          5m        v1.6.0
               Ready          5m        v1.6.0

E:\PP1.2>kubectl get pods 
NAME                  READY     STATUS    RESTARTS   AGE 
tomcat-server-wqfzr   1/1       Running   0          20h

E:\PP1.2>kubectl get pods --all-namespaces 
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE 
default       tomcat-server-wqfzr                     1/1       Running   0          20h 
kube-system   k8s-master-                 4/4       Running   10         21h 
kube-system   k8s-proxy-v1-27j2r                      1/1       Running   0          21h 
kube-system   k8s-proxy-v1-4p324                      1/1       Running   0          21h 
kube-system   kube-addon-manager-         1/1       Running   0          21h 
kube-system   kube-dns-806549836-rqwlh                3/3       Running   0          21h 
kube-system   kubernetes-dashboard-2917854236-2k1sv   1/1       Running   0          21h


E:\PP1.2>kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes                <none>        443/TCP          21h
tomcat                    <pending>     8080:30001/TCP   20h

E:\PP1.2>kubectl create -f C:\Users\chogan\Downloads\nginx.yaml
service "nginx-demo-service" created
replicationcontroller "nginx-demo" created

E:\PP1.2>kubectl get svc
NAME                 CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes                        <none>        443/TCP          21h
nginx-demo-service                <nodes>       80:30570/TCP     6s
tomcat                            <pending>     8080:30001/TCP   20h

E:\PP1.2>kubectl describe svc nginx-demo-service
Name:                   nginx-demo-service
Namespace:              default
Labels:                 app=nginx-demo
Annotations:            <none>
Selector:               app=nginx-demo
Type:                   NodePort
Port:                   http    80/TCP
NodePort:               http    30570/TCP
Endpoints:    ,,
Session Affinity:       None
Events:                 <none>

As I mentioned in the beginning, if you’re already well-versed in K8s, then this is not going to be of much use to you. However, if you are only just getting started with it, especially on Photon Platform v1.2, you might find this useful.
