Kubernetes on Photon Controller

Another container framework that VMware customers can evaluate on Photon Controller is Kubernetes, developed by Google and now open-sourced. Kubernetes is another popular framework that allows customers to automate, manage and scale containers. Just as with my previous articles on Mesos and Docker Swarm, the Photon Controller deployment steps are very similar. While I will show the additional steps required to get Kubernetes deployed, I want to focus once again on the “what do I do now?” question, as this is pretty much the most common question from folks who have gone through the deployment of Photon Controller and the creation of a container framework/cluster. For this post, I am going to show you how to use the “kubectl” CLI utility and how to get started with some K8S containers (K8S is short-hand for Kubernetes).

*** Please note that at the time of writing, Photon Controller is still not GA ***

*** The steps highlighted here may change in the GA version of the product ***

Let me touch on the Photon Controller deployment steps once more. Deploying the Photon Controller Installer OVA, creating the Photon Controller framework, and creating the tenants, resource tickets and projects are identical to the steps outlined in the Docker Swarm post. I won’t cover them here, so please refer back to that post for the basic steps.

1. Create a Kubernetes image

However, I still need to get a K8S VMDK/VM “image” and enable my Photon Controller deployment for K8S clusters. This image will be used to create the K8S etcd, master and slave worker machines (VMs) which will form the cluster and allow us to deploy containers. Below are the steps to do just that (you’ll find further detail in the previous posts on Swarm and Mesos). The link to the Kubernetes image can be found here. I recommend using the “-i EAGER” option, which uploads the image to the image datastore; this speeds up cluster creation as the image is already in place. As I mentioned in previous posts, you should also wait for the image to finish uploading to the image datastore before proceeding to the next step of building the cluster, to avoid potential timeouts.

I am running these Photon CLI commands from my desktop. The Photon CLI utility can be downloaded from github here. I first show that there is no Kubernetes image present, then create one from the VMDK I have locally in my “Downloads” folder. Finally, I enable my deployment for Kubernetes.

C:\Users\chogan>photon image list
Using target 'http://10.27.44.34:28080'
ID                                    Name                             State \
 Size(Byte)   Replication_type  ReplicationProgress  SeedingProgress
8e924447-d248-44b3-a811-1cf62e0caf3d  photon-management-vm-disk1.vmdk  READY \
 41943040000  ON_DEMAND         17%                  100%
Total: 1

C:\Users\chogan>photon image create \
Downloads\photon-kubernetes-vm-disk1.vmdk -n k8-vm.vmdk -i EAGER
Using target 'http://10.27.44.34:28080'
Created image 'k8-vm.vmdk' ID: 702b394e-c529-4252-8d3c-324bf0710522

C:\Users\chogan>photon deployment enable-cluster-type \ 
7c2941f4-5f93-495d-843a-693c6106111e -k KUBERNETES \
-i 702b394e-c529-4252-8d3c-324bf0710522
Are you sure [y/n]? y
Using target 'http://10.27.44.34:28080'
Cluster Type: KUBERNETES
Image ID:     702b394e-c529-4252-8d3c-324bf0710522

C:\Users\chogan>photon deployment show 7c2941f4-5f93-495d-843a-693c6106111e
Using target 'http://10.27.44.34:28080'

Deployment ID: 7c2941f4-5f93-495d-843a-693c6106111e
  State:                       READY
  Image Datastores:            [isilion-nfs-01]
  Use image datastore for vms: false
  Syslog Endpoint:             -
  Ntp Endpoint:                10.27.51.252
  LoadBalancer:
    Enabled:                   true
    Address:                   10.27.44.34
 
Auth:
   Enabled:                   false
  
Stats:
   Enabled:                   false
  
Migration status:
    Completed data migration cycles:          0
    Current data migration cycles progress:   0 / 0
    VIB upload progress:                      0 / 0
  
Cluster Configurations:
    ClusterConfiguration 1:
      Kind:      clusterConfig
      Type:      KUBERNETES
      ImageID:   702b394e-c529-4252-8d3c-324bf0710522    

  Job            VM IP(s)     Ports
  CloudStore     10.27.44.34  19000, 19001
  Deployer       10.27.44.34  18000, 18001
  Housekeeper    10.27.44.34  16000, 16001
  LoadBalancer   10.27.44.34  28080, 4343, 443, 80
  ManagementApi  10.27.44.34  9000
  ManagementUi   10.27.44.34  20000, 20001
  RootScheduler  10.27.44.34  13010
  Zookeeper      10.27.44.34  2181, 2888, 3888

  VM IP        Host IP     VM ID                                 VM Name
  10.27.44.34  10.27.51.8  a22bba76-99c4-407b-9d4f-30844d1cf5a3  ec-mgmt-10-27-51-87f27e
C:\Users\chogan>

2. Create a Kubernetes cluster

Now that we have the Kubernetes image uploaded to the image datastore, and the deployment is configured for K8S, the cluster can be created. I simply called it “Kube”. You will need two static IP addresses for K8S on the management network (one for the master node and the other for the etcd node). The management network will also require DHCP for the slave nodes. You will also need a range of IP addresses on the container network (I have picked 10.2.0.0/16 in this example).

C:\Users\chogan>photon cluster create -n Kube -k KUBERNETES \
--dns 10.16.142.110 --gateway 10.27.47.254 --netmask 255.255.240.0 \
--master-ip 10.27.44.35 --container-network 10.2.0.0/16 --etcd1 10.27.44.36 -s 1
Using target 'http://10.27.44.34:28080'
etcd server 2 static IP address (leave blank for none):

Creating cluster: Kube (KUBERNETES)
  Slave count: 1
Are you sure [y/n]? y
Cluster created: ID = 54578909-6317-4d7d-ab80-005a2f0e9f60
Note: the cluster has been created with minimal resources. You can use the cluster now.
A background task is running to gradually expand the cluster to its target capacity.
You can run 'cluster show 54578909-6317-4d7d-ab80-005a2f0e9f60' to see the state of \
the cluster.

C:\Users\chogan>

At this point, we can examine the cluster. Only the master and etcd nodes are displayed; slaves are not:

C:\Users\chogan>photon cluster show 54578909-6317-4d7d-ab80-005a2f0e9f60
Using target 'http://10.27.44.34:28080'
Cluster ID:             54578909-6317-4d7d-ab80-005a2f0e9f60
  Name:                 Kube
  State:                READY
  Type:                 KUBERNETES
  Slave count:          1
  Extended Properties:  map[etcd_ips:10.27.44.36 master_ip:10.27.44.35 \
  gateway:10.27.47.254 container_network:10.2.0.0/16 netmask:255.255.240.0 \
  dns:10.16.142.110]
VM ID                                 VM Name                                      \
VM IP
074d87d6-e259-42e2-a68a-ddb4f4e88afe  master-459e501f-1d17-4a89-b408-a78902fb6db2  \
10.27.44.35
b3093ee6-b02f-4005-b7ab-bed9ef439d0d  etcd-13cf02af-a5f7-47b6-826c-937d018312ad    \
10.27.44.36

To see the slave(s), the following command can be used:

C:\Users\chogan>photon cluster list_vms 54578909-6317-4d7d-ab80-005a2f0e9f60
Using target 'http://10.27.44.34:28080'
ID                                    Name                                         State
074d87d6-e259-42e2-a68a-ddb4f4e88afe  master-459e501f-1d17-4a89-b408-a78902fb6db2  STARTED
18eca5ba-da65-4f8a-b2d4-f2d426b83317  slave-6cf96592-8a00-4365-a6c3-596e1f7706ea   STARTED
b3093ee6-b02f-4005-b7ab-bed9ef439d0d  etcd-13cf02af-a5f7-47b6-826c-937d018312ad    STARTED
Total: 3
STARTED: 3

C:\Users\chogan>

3. Some K8S terminology

Great, we now have the Kubernetes cluster deployed on top of Photon Controller. As mentioned in the introduction, this inevitably leads us on to the next question: “what do I do next?”. I mentioned that we will do some demoing with “kubectl”, the K8S CLI utility, which you can download from the K8S github repository. Before we get into using it, let’s talk a little bit about the new concepts and terminology associated with K8S.

The first term to clarify is nodes. In K8S, these are the worker machines. In our deployment, Photon Controller rolls out VMs for the master and slave node(s). Each node has the necessary components to run what are called Pods.

So what is a Pod? A pod is the term used for a group of one or more containers, the shared storage for those containers, and the options for how to run those containers. You can also think of a pod as the unit of an application running on the cluster.
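To make the idea concrete, here is a minimal pod manifest of my own (a sketch for illustration only; nothing in this post actually creates it) describing a single nginx container. It could be saved to a file and submitted with something like “kubectl -s 10.27.44.35:8080 create -f nginx-pod.yaml”:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod          # hypothetical name, for illustration only
spec:
  containers:
  - name: nginx
    image: nginx           # same public nginx image used later in this post
    ports:
    - containerPort: 80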

This brings us on to Replication Controllers. The sole purpose of a replication controller is to ensure that a specified number of pod “replicas” are running at any one time, so that even in the event of a failure, the pods (and the applications running in their containers) continue to run.
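As a sketch of what that looks like in practice (again my own illustration, not something created directly in this post), a replication controller definition wraps a pod template together with the desired replica count and a label selector used to find its pods:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-app
spec:
  replicas: 2                # keep two copies of the pod running at all times
  selector:
    run: nginx-app           # pods carrying this label are managed by this controller
  template:                  # the pod template that gets replicated
    metadata:
      labels:
        run: nginx-app
    spec:
      containers:
      - name: nginx-app
        image: nginx
        ports:
        - containerPort: 80

The “kubectl run” and “kubectl scale” commands used later in this post effectively create and modify an object like this behind the scenes.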

The other term you will come across is Service. Pods are designed to be transient; they can fail or go away at any time. However, certain applications may be composed of what might be termed front-end pods and back-end pods. A front-end pod should not care which back-end pod it talks to, so long as it can talk to at least one of them. The idea of a Service allows the front-end and back-end to be decoupled, while still providing the required functionality/communication.
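A minimal service definition (my own sketch, mirroring the nginx-http service created later in this post) is little more than a label selector identifying the pods that provide the functionality, plus the port to expose:

apiVersion: v1
kind: Service
metadata:
  name: nginx-http
spec:
  selector:
    run: nginx-app         # route traffic to any pod carrying this label
  ports:
  - port: 80               # port exposed on the service's cluster IP
    targetPort: 80         # port the nginx container is listening on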

4. Fun with kubectl

OK, let’s now do something useful with kubectl. In this example, my master node has an IP address of 10.27.44.35, so all of the following commands need to point to the master.
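As an aside, rather than passing “-s 10.27.44.35:8080” on every invocation, kubectl can be pointed at the master once via its kubeconfig. I have kept the “-s” flag throughout this post for clarity, but something along the lines of the following should also work (the cluster and context names here are my own, chosen purely for illustration):

kubectl config set-cluster photon-kube --server=http://10.27.44.35:8080
kubectl config set-context photon-kube --cluster=photon-kube
kubectl config use-context photon-kube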

First off, let’s get the version of K8S. This can be gleaned from the major and minor numbers shown below.

C:\Users\chogan>kubectl -s 10.27.44.35:8080 version
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3+$Format:%h$", \
GitCommit:"$Format:%H$", GitTreeState:"not a git tree"}Server Version: \
version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.1", \
GitCommit:"6a5c06e3d1eb27a6310a09270e4a5fb1afa93e74", GitTreeState:"clean"}

Let’s now look at the nodes (worker machines). We can see two nodes at the moment, a master and a slave. Nodes in this case correspond to the virtual machines.

C:\Users\chogan>kubectl -s 10.27.44.35:8080 get nodes
NAME                                          LABELS                                 \
                              STATUS    AGE
master-459e501f-1d17-4a89-b408-a78902fb6db2   kubernetes.io/hostname=master-459e501f-\
1d17-4a89-b408-a78902fb6db2   Ready     21m
slave-6cf96592-8a00-4365-a6c3-596e1f7706ea    kubernetes.io/hostname=slave-6cf96592-\
8a00-4365-a6c3-596e1f7706ea   Ready     20m

Let’s deploy our first container using kubectl. In this case, I am going to deploy a container using the nginx image and give it the name nginx-app (nginx, pronounced “engine-X”, is essentially a web server). The DOMAIN environment variable is set to the name of my cluster. Note that this creates a replication controller, or “rc”.

C:\Users\chogan>kubectl -s 10.27.44.35:8080 run --image nginx nginx-app \
--port=80 --env="DOMAIN=Kube"
replicationcontroller "nginx-app" created

Next, I am going to create a service using the replication controller (rc) nginx-app and call it nginx-http.

C:\Users\chogan>kubectl -s 10.27.44.35:8080 expose rc nginx-app --port 80 \
--name=nginx-http
service "nginx-http" exposed

Now if I examine my pods, I can see the nginx-app pod, called nginx-app-qauw0:

C:\Users\chogan>kubectl -s 10.27.44.35:8080 get pods
NAME                                                    READY     STATUS    RESTARTS\
   AGE
k8s-master-master-459e501f-1d17-4a89-b408-a78902fb6db2  3/3       Running   0       \
   32m
nginx-app-qauw0                                         1/1       Running   0       \
   9m

And if I get additional detail about the replication controller, nginx-app, I can see that there is only a single replica running, meaning that the pod is not highly available. We will address that shortly.


C:\Users\chogan>kubectl -s 10.27.44.35:8080 get rc nginx-app
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR        REPLICAS   AGE
nginx-app    nginx-app      nginx      run=nginx-app   1          18m

Additional details about the replication controller can be obtained using the following command:

C:\Users\chogan>kubectl -s 10.27.44.35:8080 describe rc nginx-app
Name:           nginx-app
Namespace:      default
Image(s):       nginx
Selector:       run=nginx-app
Labels:         run=nginx-app
Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath \
Reason          Message
  ─────────     ────────        ─────   ────                            ───────────── \
──────          ───────
  29m           29m             1       {replication-controller }                     \
successfulCreate        Created pod: nginx-app-qauw0

And if you wish to see which replication controllers and services are running, this information is also available:

C:\Users\chogan>kubectl -s 10.27.44.35:8080 get rc,services
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR        REPLICAS   AGE
nginx-app    nginx-app      nginx      run=nginx-app   1          52m
NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR        AGE
kubernetes   10.0.0.1     <none>        443/TCP   <none>          1h
nginx-http   10.0.0.246   <none>        80/TCP    run=nginx-app   46m

Administrators can also look at the services in further detail. Here you can see not only the Cluster IP for the services, but also the IP address endpoints on the container network.

C:\Users\chogan>kubectl -s 10.27.44.35:8080 describe service nginx-http
Name:                   nginx-http
Namespace:              default
Labels:                 run=nginx-app
Selector:               run=nginx-app
Type:                   ClusterIP
IP:                     10.0.0.246
Port:                   <unnamed>       80/TCP
Endpoints:              10.2.9.2:80
Session Affinity:       None
No events.
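As a quick sanity check (not captured in the output above), you should be able to verify that nginx is actually serving content. From a shell on one of the cluster nodes, where kube-proxy handles the cluster IP, fetching the service should return the default “Welcome to nginx!” page; the endpoint address on the container network (10.2.9.2 above) can also be hit directly from that network:

curl http://10.0.0.246
curl http://10.2.9.2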

Let’s now look at how one can scale out the replicas managed by the replication controller, as well as the worker machines (nodes/VMs).

5. Scaling out replicas and worker machines

Let’s start by scaling the nginx-app replication controller out to two replicas. We can use the following command.

C:\Users\chogan>kubectl -s 10.27.44.35:8080 scale --replicas=2 rc nginx-app
replicationcontroller "nginx-app" scaled

Let’s rerun the command from earlier and examine the number of replicas. We can now see that it has increased to 2, meaning the nginx-app pod is more highly available.

C:\Users\chogan>kubectl -s 10.27.44.35:8080 get rc nginx-app
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR        REPLICAS   AGE
nginx-app    nginx-app      nginx      run=nginx-app   2          1h

C:\Users\chogan>

And using the “describe” option, we can see that the number of pods for this application has increased to 2:

C:\Users\chogan>kubectl -s 10.27.44.35:8080 describe rc nginx-app
Name:           nginx-app
Namespace:      default
Image(s):       nginx
Selector:       run=nginx-app
Labels:         run=nginx-app
Replicas:       2 current / 2 desired
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath\
   Reason          Message
  ─────────     ────────        ─────   ────                            ─────────────\
   ──────          ───────
  14m           14m             1       {replication-controller }                    \
   successfulCreate        Created pod: nginx-app-tyw0q

Let’s use the Photon Controller CLI to add some additional worker machines, in other words, new VMs for running more containers:

C:\Users\chogan>photon cluster resize 54578909-6317-4d7d-ab80-005a2f0e9f60 3
Using target 'http://10.27.44.34:28080'

Resizing cluster 54578909-6317-4d7d-ab80-005a2f0e9f60 to slave count 3
Are you sure [y/n]? y
RESIZE_CLUSTER completed for '' entity
Note: A background task is running to gradually resize the cluster to its target 
capacity.
You may continue to use the cluster. You can run 'cluster show '
to see the state of the cluster. If the resize operation is still in progress, 
the cluster state
will show as RESIZING. Once the cluster is resized, the cluster state will show 
as READY.

C:\Users\chogan>

This may take some time, as the slave image may have to be copied to a new ESXi CLOUD host if it is not already present there (for example, if the -i EAGER option was not used when the image was initially created). During the creation of new slaves, the cluster status changes to RESIZING:

C:\Users\chogan>photon cluster show 54578909-6317-4d7d-ab80-005a2f0e9f60
Using target 'http://10.27.44.34:28080'
Cluster ID:             54578909-6317-4d7d-ab80-005a2f0e9f60
  Name:                 Kube
  State:                RESIZING
  Type:                 KUBERNETES
  Slave count:          3
  Extended Properties:  map[netmask:255.255.240.0 dns:10.16.142.110 \
etcd_ips:10.27.44.36 master_ip:10.27.44.35 gateway:10.27.47.254 \
container_network:10.2.0.0/16]

VM ID                                 VM Name                                      \
VM IP
074d87d6-e259-42e2-a68a-ddb4f4e88afe  master-459e501f-1d17-4a89-b408-a78902fb6db2  \
10.27.44.35
b3093ee6-b02f-4005-b7ab-bed9ef439d0d  etcd-13cf02af-a5f7-47b6-826c-937d018312ad    \
10.27.44.36

When the list of VMs is queried, you may notice that some have “started” and some are in a state of “creating”. In this example, the creating status was due to the image being uploaded to a new ESXi Cloud host where the slave VM/node was being deployed.

C:\Users\chogan>photon cluster list_vms 54578909-6317-4d7d-ab80-005a2f0e9f60
Using target 'http://10.27.44.34:28080'
ID                                    Name                                         State
074d87d6-e259-42e2-a68a-ddb4f4e88afe  master-459e501f-1d17-4a89-b408-a78902fb6db2  STARTED
1130116f-a8e8-4a8d-aedf-7d928bcd0637  slave-63982978-b4e2-418f-b2a5-480864758904   CREATING
18eca5ba-da65-4f8a-b2d4-f2d426b83317  slave-6cf96592-8a00-4365-a6c3-596e1f7706ea   STARTED
b3093ee6-b02f-4005-b7ab-bed9ef439d0d  etcd-13cf02af-a5f7-47b6-826c-937d018312ad    STARTED
d6c539a0-8e33-4364-a309-31d8b57f63ab  slave-12ce90c4-1652-4240-b8c2-57805d734c85   STARTED
Total: 5
STARTED: 4
CREATING: 1

C:\Users\chogan>

If we flip back to the kubectl command, we can now see that there is an additional slave node. The one in a state of “creating” above has not been added yet.

C:\Users\chogan>kubectl -s 10.27.44.35:8080 get nodes
NAME                                          LABELS                                \
                               STATUS    AGE
master-459e501f-1d17-4a89-b408-a78902fb6db2   kubernetes.io/hostname=master-459e501f\
-1d17-4a89-b408-a78902fb6db2   Ready     1h
slave-12ce90c4-1652-4240-b8c2-57805d734c85    kubernetes.io/hostname=slave-12ce90c4\
-1652-4240-b8c2-57805d734c85    Ready     5m
slave-6cf96592-8a00-4365-a6c3-596e1f7706ea    kubernetes.io/hostname=slave-6cf96592\
-8a00-4365-a6c3-596e1f7706ea    Ready     1h

C:\Users\chogan>

Eventually, when the image uploads and the new slave is created, we should see all of the slaves in a “started” state, and kubectl should show us the new slave also:

C:\Users\chogan>photon cluster list_vms 54578909-6317-4d7d-ab80-005a2f0e9f60
Using target 'http://10.27.44.34:28080'
ID                                    Name                                         State
074d87d6-e259-42e2-a68a-ddb4f4e88afe  master-459e501f-1d17-4a89-b408-a78902fb6db2  STARTED
18eca5ba-da65-4f8a-b2d4-f2d426b83317  slave-6cf96592-8a00-4365-a6c3-596e1f7706ea   STARTED
9f197d88-2e50-40a0-9a31-4d628ca526bb  slave-3590320e-d93c-4b15-82e3-90df740a6a68   STARTED
b3093ee6-b02f-4005-b7ab-bed9ef439d0d  etcd-13cf02af-a5f7-47b6-826c-937d018312ad    STARTED
d6c539a0-8e33-4364-a309-31d8b57f63ab  slave-12ce90c4-1652-4240-b8c2-57805d734c85   STARTED
Total: 5
STARTED: 5

C:\Users\chogan>kubectl -s 10.27.44.35:8080 get nodes
NAME                                          LABELS                               \
                                STATUS    AGE
master-459e501f-1d17-4a89-b408-a78902fb6db2   kubernetes.io/hostname=master-459e501f\
-1d17-4a89-b408-a78902fb6db2   Ready     2h
slave-12ce90c4-1652-4240-b8c2-57805d734c85    kubernetes.io/hostname=slave-12ce90c4\
-1652-4240-b8c2-57805d734c85    Ready     26m
slave-3590320e-d93c-4b15-82e3-90df740a6a68    kubernetes.io/hostname=slave-3590320e\
-d93c-4b15-82e3-90df740a6a68    Ready     1m
slave-6cf96592-8a00-4365-a6c3-596e1f7706ea    kubernetes.io/hostname=slave-6cf96592\
-8a00-4365-a6c3-596e1f7706ea    Ready     2h

C:\Users\chogan>

6. Removing the cluster

In this last step, I will tear down the K8S configuration, and also remove the cluster using the Photon CLI command.

First, I will delete the nginx-http service:

C:\Users\chogan>kubectl -s 10.27.44.35:8080 delete service nginx-http
service "nginx-http" deleted

Next, I will remove the nginx-app replication controller:

C:\Users\chogan>kubectl -s 10.27.44.35:8080 delete rc nginx-app
replicationcontroller "nginx-app" deleted

If I check the pods, I will now see only the master pod; the nginx-app pod has been removed:

C:\Users\chogan>kubectl -s 10.27.44.35:8080 get pods
NAME                                                     READY   STATUS   RESTARTS  AGE
k8s-master-master-459e501f-1d17-4a89-b408-a78902fb6db2   3/3     Running  0         2h

Now I can use the Photon CLI command to delete the cluster:

C:\Users\chogan>photon cluster delete 54578909-6317-4d7d-ab80-005a2f0e9f60
Using target 'http://10.27.44.34:28080'

Deleting cluster 54578909-6317-4d7d-ab80-005a2f0e9f60
Are you sure [y/n]? y
DELETE_CLUSTER completed for '' entity

C:\Users\chogan>

Once again, I hope this shows you how Photon Controller allows you to use ESXi resources for container frameworks such as Kubernetes. In this case K8S “worker machines” are running as VMs. Using the Photon CLI, you can very quickly deploy Photon Controller, and then create a K8S cluster on top. And then, it is just like using K8S as you normally would for container deployment and management. The set of “kubectl” examples will hopefully give you some idea of what you can do, and allow you to go on and explore the capabilities in further detail.
