PKS Revisited – Project Hatchway / K8s vSphere Cloud Provider review

As I am going to be doing some talks around next-gen applications at this year’s VMworld event, I took the opportunity to revisit Pivotal Container Service (PKS) to take a closer look at how we can provision persistent volumes for container-based applications. Not only that, but I also wanted to leverage the vSphere Cloud Provider feature, which is part of our Project Hatchway initiative. I’ve written about Project Hatchway a few times now, but in a nutshell it allows us to create persistent container volumes on vSphere storage and, at the same time, set a storage policy on the volume. For example, when deploying on vSAN, you could choose to protect the container volume using RAID-1, RAID-5 or RAID-6. OK, let’s get started. The following steps will explain how to dynamically provision a volume with a specific storage policy.

Obviously you will need to have a PKS environment, and there are some steps on how to do this in other posts on this site. PKS provisions Kubernetes clusters that can then be used for deploying container-based applications. The container-based application that I am going to use is a simple Nginx web server, and I will create a persistent volume (PV) that can be associated with the application to hold some persistent content. There are two parts to the creation of a persistent volume. The first is the StorageClass, which defines the intended storage policy for a PV that will be dynamically provisioned. Here is the sample storage class manifest (yaml) file that I created:

root@pks-cli:~/nginx# cat nginx-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nginx-storageclass
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  hostFailuresToTolerate: "0"
  datastore: vsanDatastore
root@pks-cli:~/nginx#

As you can see, I have selected the vsanDatastore and my policy is NumberOfFailuresToTolerate = 0. I could of course have added other policy settings, but I just wanted to see it working, so I kept it simple (see the sketch below for an idea of other parameters you could add). You will need to note the name of the StorageClass, as you will need to use it in the claim next.
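
As an aside, the kubernetes.io/vsphere-volume provisioner accepts a number of other vSAN policy parameters in a StorageClass. The sketch below is purely illustrative and is not what I deployed: the parameter names (diskStripes, objectSpaceReservation and so on) are the documented vSAN policy parameters for this provisioner, but the values are just examples, so treat it as a starting point rather than a recipe.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nginx-storageclass-raid1
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: vsanDatastore
  hostFailuresToTolerate: "1"      # RAID-1 mirroring, tolerate one host failure
  diskStripes: "2"                 # stripe each component across two capacity devices
  objectSpaceReservation: "30"     # reserve 30% of the object size up front

Here is what my persistent volume claim manifest (yaml) file looks like: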

root@pks-cli:~/nginx# cat nginx-pvc-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc-claim
  annotations:
   volume.beta.kubernetes.io/storage-class: nginx-storageclass
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
root@pks-cli:~/nginx#

Note that the storage class created previously is referenced here. Now it is time to create the manifest/yaml file for our Nginx application. You will see in that application’s manifest file where the persistent volume claim (which in turn references the StorageClass) is used.

root@pks-cli:~/nginx# cat nginx-harbor-lb-pv.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
  namespace: default
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
      namespace: default
    spec:
      containers:
      - name: webserver
        image: harbor.rainpole.com/library/nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-storageclass
          mountPath: /test
      volumes:
      - name: nginx-storageclass
        persistentVolumeClaim:
          claimName: nginx-pvc-claim
root@pks-cli:~/nginx#

A few things to point out with this manifest. I configured a LoadBalancer service so that my application is easily accessible from the outside world. I am also using Harbor for my images rather than downloading them from an external source. Now we are ready to start deploying our application with its persistent volume. Note that kubectl is the client interface to the Kubernetes cluster: it reads the manifest/yaml file and talks to the API server on the master node, which then deploys the application on the cluster.

root@pks-cli:~/nginx# kubectl create -f nginx-storageclass.yaml
storageclass "nginx-storageclass" created


root@pks-cli:~/nginx# kubectl create -f nginx-pvc-claim.yaml
persistentvolumeclaim "nginx-pvc-claim" created


root@pks-cli:~/nginx# kubectl get sc
NAME                 PROVISIONER                    AGE
nginx-storageclass   kubernetes.io/vsphere-volume   10s


root@pks-cli:~/nginx# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                     STORAGECLASS         REASON    AGE
pvc-99724675-8c10-11e8-939b-005056826ff1   2Gi        RWO            Delete           Bound     default/nginx-pvc-claim   nginx-storageclass             15s
root@pks-cli:~/nginx#

The PV is now set up. Let’s deploy our Nginx application and get the PV mounted on /test in the container. We can then use the kubectl describe command to see the events associated with the mount operation.

root@pks-cli:~/nginx# kubectl create -f nginx-harbor-lb-pv.yaml
service "nginx" created
deployment "nginx" created

root@pks-cli:~/nginx# kubectl describe pods
Name: nginx-65b4fcccd4-vqrxc
Namespace: default
Node: 86bfa120-77f7-4eb2-9783-87c9247da886/192.168.191.203
Start Time: Fri, 20 Jul 2018 12:50:27 +0100
Labels: app=nginx
        pod-template-hash=2160977780
Annotations: <none>
Status: Running
IP: 172.16.9.2
Controlled By: ReplicaSet/nginx-65b4fcccd4
Containers:
 webserver:
  Container ID: docker://57a3b709b1f18d60f9b4e7472c7f4c4b8657d8e233eedc25a2118740af83000b
  Image: harbor.rainpole.com/library/nginx
  Image ID: docker-pullable://harbor.rainpole.com/library/nginx@sha256:edad5e71815c79108ddbd1d42123ee13ba2d8050ad27cfa72c531986d03ee4e7
  Port: 80/TCP
  State: Running
   Started: Fri, 20 Jul 2018 12:50:31 +0100
  Ready: True
  Restart Count: 0
  Environment: <none>
  Mounts:
    /test from nginx-storageclass (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-b62vf (ro)
Conditions:
  Type Status
  Initialized True
  Ready True
  PodScheduled True
Volumes:
  nginx-storageclass:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: nginx-pvc-claim
    ReadOnly: false
  default-token-b62vf:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-b62vf
    Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled <invalid> default-scheduler Successfully assigned nginx-65b4fcccd4-vqrxc to 86bfa120-77f7-4eb2-9783-87c9247da886
  Normal SuccessfulMountVolume <invalid> kubelet, 86bfa120-77f7-4eb2-9783-87c9247da886 MountVolume.SetUp succeeded for volume "default-token-b62vf"
  Normal SuccessfulMountVolume <invalid> kubelet, 86bfa120-77f7-4eb2-9783-87c9247da886 MountVolume.SetUp succeeded for volume "pvc-99724675-8c10-11e8-939b-005056826ff1"
  Normal Pulling <invalid> kubelet, 86bfa120-77f7-4eb2-9783-87c9247da886 pulling image "harbor.rainpole.com/library/nginx"
  Normal Pulled <invalid> kubelet, 86bfa120-77f7-4eb2-9783-87c9247da886 Successfully pulled image "harbor.rainpole.com/library/nginx"
  Normal Created <invalid> kubelet, 86bfa120-77f7-4eb2-9783-87c9247da886 Created container
  Normal Started <invalid> kubelet, 86bfa120-77f7-4eb2-9783-87c9247da886 Started container
root@pks-cli:~/nginx#

Excellent. It does look like the volume has mounted. We can now open a shell session to the container and verify.

root@pks-cli:~/nginx# kubectl get pods
NAME                   READY STATUS  RESTARTS AGE
nginx-65b4fcccd4-vqrxc 1/1   Running 0        2m

root@pks-cli:~/nginx# kubectl exec -it nginx-65b4fcccd4-vqrxc -- /bin/bash

root@nginx-65b4fcccd4-vqrxc:/# mount | grep /test
/dev/sdd on /test type ext4 (rw,relatime,data=ordered)
root@nginx-65b4fcccd4-vqrxc:/#
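
For completeness, the claim can also be checked from the kubectl side; it should report a STATUS of Bound against the dynamically provisioned volume we saw earlier.

kubectl get pvc nginx-pvc-claim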

You will also notice a number of events taking place on vSphere at this stage. The creation of the PV involves the creation of a temporary VM, so you will see events of that nature appearing in the vSphere client.

Once the volume is created, the PV should be visible in the kubevols folder. Since I specified the vsanDatastore in the StorageClass manifest, that is where it will appear.
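
If you prefer the command line to the datastore browser, something along these lines should list the backing VMDK. This is just a sketch, assuming you have govc installed and pointed at the vCenter managing the cluster (via GOVC_URL and credentials); kubevols is the folder the vSphere Cloud Provider uses for dynamically provisioned volumes.

govc datastore.ls -ds vsanDatastore kubevols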

Now let’s do something interesting to show that the data is persisted in my PV. From the shell prompt in the container, we will navigate to /usr/share/nginx, where the default Nginx landing page is found, and copy it to /test, where our PV is mounted. We will then make a change to the index.html file and save it. Next, we will stop the application and modify its manifest file so that our PV is mounted on /usr/share/nginx instead. When the application is accessed again, it should show us the modified landing page.

root@nginx-68558fb67c-mdlh5:/# cd /usr/share/nginx
root@nginx-68558fb67c-mdlh5:/usr/share/nginx# ls
html
root@nginx-68558fb67c-mdlh5:/usr/share/nginx# cp -r html/ /test/
root@nginx-68558fb67c-mdlh5:/usr/share/nginx# cd /test/html
root@nginx-68558fb67c-mdlh5:/test/html# ls
50x.html index.html

root@nginx-68558fb67c-mdlh5:/test/html# mv index.html oldindex.html
root@nginx-68558fb67c-mdlh5:/test/html# sed -e 's/nginx/cormac nginx/g' oldindex.html >> index.html
root@nginx-68558fb67c-mdlh5:/test/html# exit

root@pks-cli:~/nginx# kubectl delete -f nginx-harbor-lb-pv.yaml
service "nginx" deleted
deployment "nginx" deleted
root@pks-cli:~/nginx#

Now change the manifest file as follows.

from:

volumeMounts:
- name: nginx-storageclass
  mountPath: /test

to:

volumeMounts:
- name: nginx-storageclass
  mountPath: /usr/share/nginx


Launch the application once more:

root@pks-cli:~/nginx# kubectl create -f nginx-harbor-lb-pv.yaml
service "nginx" created
deployment "nginx" created


And now the Nginx landing page should show your changes, proving that the data persists even when the application goes away.
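
A quick way to check this, sketched below, is to grab the external IP that the LoadBalancer service was given and fetch the page with curl. The <EXTERNAL-IP> placeholder is whatever kubectl reports for the nginx service; if the edit above worked, the page title should now read "cormac nginx" rather than plain "nginx".

kubectl get svc nginx        # note the EXTERNAL-IP column for the LoadBalancer service
curl http://<EXTERNAL-IP>/   # the modified index.html is now served from the PV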

Now, when I deployed this application, I requested that the policy should be FTT=0. How do I confirm that? First, let’s figure out which worker VM our application is running on. The kubectl describe pods output above shows the worker’s IP address in the Node field. Now we just need to match that against the IP addresses of the K8s worker VMs which were deployed by PKS. In my case, it is the VM with the name beginning with vm-617f.
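
If you want to see all of the worker IP addresses in one go, rather than reading them out of kubectl describe, the following should do it; you can then match the INTERNAL-IP column against the worker VMs in the vSphere client.

kubectl get nodes -o wide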

For me, the easiest way now is to use RVC (old habits die hard). If I log in to my vCenter server and launch the Ruby vSphere Console, then navigate to the VMs, I can list all of the storage objects associated with a particular VM, in this case my K8s worker VM. As I am not doing anything else with PVs, I expect that there will be only one, and from there I can verify the policy.

/vcsa-06/CH-Datacenter/vms/pcf_vms> ls
0 7b70b6a2-4ae5-42b1-83f9-5dc189881c99/
1 vm-25370103-3c8b-4393-ba4c-e46873515da3: poweredOn
2 vm-5e0fe3c7-5761-41b6-8ecb-1079e853378c: poweredOn
3 vm-b8bb2ff6-6e6c-4830-98d8-f6f8b62d5522: poweredOn
4 vm-6b3c049f-710d-44dd-badb-bbdef4c81ceb: poweredOn
5 vm-205d59bd-5646-4300-8a51-78fe1663b899: poweredOn
6 vm-617f8efd-b3f5-490d-8f8d-9dc9c58cea0f: poweredOn

/vcsa-06/CH-Datacenter/vms/pcf_vms> vsan.vm_object_info 6
VM vm-617f8efd-b3f5-490d-8f8d-9dc9c58cea0f:
VM has a non-vSAN datastore
Namespace directory
Couldn't find info about DOM object 'vm-617f8efd-b3f5-490d-8f8d-9dc9c58cea0f'
Disk backing: [vsanDatastore] fef94d5b-fa8b-491f-bf0a-246e962f4850/kubernetes-dynamic-pvc-99724675-8c10-11e8-939b-005056826ff1.vmdk
DOM Object: 6ec8515b-8520-fc51-dc46-246e962c2408 (v6, owner: esxi-dell-e.rainpole.com, proxy owner: None, policy: hostFailuresToTolerate = 0, CSN = 7)
Component: 6ec8515b-281b-7552-4835-246e962c2408 (state: ACTIVE (5), host: esxi-dell-g.rainpole.com, capacity: naa.500a07510f86d693, cache: naa.5001e82002675164,
votes: 1, usage: 0.0 GB, proxy component: false)
/vcsa-06/CH-Datacenter/vms/pcf_vms>

That looks good to me: the DOM object shows a policy of hostFailuresToTolerate = 0, exactly as requested. I hope that helps you get started with our vSphere Cloud Provider for PVs on PKS (and of course native Kubernetes).


Caveat

During my testing, I had issues creating the PV. I kept hitting the following error.

root@pks-cli:~# kubectl describe pvc
Name:          cormac-slave-claim
Namespace:     default
StorageClass:  thin-disk
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim",\
               "metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"thin-disk"},"name":"cormac-slav...
               volume.beta.kubernetes.io/storage-class=thin-disk
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume
Finalizers:    []
Capacity:
Access Modes:
Events:
  Type     Reason              Age                     From                         Message
  ----     ------              ----                    ----                         -------
  Warning  ProvisioningFailed  <invalid> (x2 over 6s)  persistentvolume-controller  Failed to \
  provision volume with StorageClass "thin-disk": folder '/CH-Datacenter/vm/pcf_vms/7b70b6a2-4ae5-42b1-83f9-5dc189881c99' \
  not found
root@pks-cli:~# 

I’m unsure if this was something that I failed to do during setup, or if it is actually an issue; we are currently investigating. However, once I manually created the folder with the ID shown in the warning under the pcf_vms folder, everything worked absolutely fine.
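
For reference, the workaround can also be done from the command line. This is just a sketch assuming govc is configured against the vCenter; the folder path is simply the one reported in the ProvisioningFailed warning above.

govc folder.create /CH-Datacenter/vm/pcf_vms/7b70b6a2-4ae5-42b1-83f9-5dc189881c99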


Shameless Plug

If you are planning to attend VMworld 2018, I’ll be running a whole breakout session on next-gen applications running on vSphere/vSAN, and will be paying particular attention to applications that require persistent storage. I’ll be hosting this session with our Storage and Availability CTO and VMware Fellow, Christos Karamanolis. The session is HCI1338BU and is titled HCI: The ideal operating environment for Cloud Native Applications. Hope you can make it. Along the same lines, my colleagues Frank Denneman and Michael Gasch are presenting CNA1553BU Deep Dive: The value of Running Kubernetes on vSphere. This should be another great session if Kubernetes and/or next-gen applications are your thing. Frank has done a great write-up here on these sessions and what you can expect.
