
Fun with PKS, K8s, VCP, StatefulSets and Couchbase

Having just deployed the newest version of Pivotal Container Service (PKS) and rolled out my first Kubernetes cluster (read all about it here), I wanted to do something a bit more interesting than create yet another persistent volume claim to test out our vSphere Cloud Provider, since I had done that a number of times already. Thanks to some of the work I have been doing with our cloud native team, I was introduced to StatefulSets. That piqued my interest, as I had not come across them before.

I guess before we do anything else, we should talk about StatefulSets, which are a relatively new construct in Kubernetes. They are very similar to ReplicaSets, in so far as they define clones of an application (a set of pods). However, StatefulSets were introduced to deal with, well, stateful applications. StatefulSets differ from ReplicaSets in a few ways. While both deal with replica copies or clones of pods, StatefulSets number the replica pods incrementally, starting with 0: the next copy gets a -1 suffix on its pod name, the one after that a -2, and so on. ReplicaSet identifiers are arbitrary, so you cannot easily tell which was the initial copy and which is the newest. StatefulSets also guarantee that the first pod (pod 0) is online and healthy before creating any clone/replica. When scaling back an application, StatefulSets remove the highest numbered pod first. We shall see some of this behaviour later on. There is an excellent write-up on StatefulSets and how they relate to ReplicaSets in the free Managing Kubernetes ebook (from my new colleagues over at Heptio).

To see this in action, I am going to use Couchbase. Couchbase is an open-source, distributed (shared-nothing architecture) NoSQL database. And it is of course stateful, so perfect for a StatefulSet. Fortunately for me, someone has already gone to the effort of making a containerized Couchbase for K8s, so kudos to them for that. The only items I need to create in K8s are the storage class YAML file, a Couchbase service YAML file so I can access the application on the network, and the StatefulSet YAML file. I was lucky once again as our team had already built these out, so there wasn’t much for me to do to get it all up and running.

Let’s take a look at the YAML files first.

Storage Class

If you’ve read my previous blogs on K8s and the vSphere Cloud Provider (VCP), this should be familiar to you. The provisioner is our vSphere Cloud Provider, called kubernetes.io/vsphere-volume. Of interest here is of course the storagePolicyName parameter, which references a policy called “gold”. This is a storage policy created via SPBM, the Storage Policy Based Management framework that we have in vSphere. The policy must be created in my vSphere environment; there is no way to create it from within K8s. I built this “gold” policy on my vsanDatastore to create a RAID-1 volume. The resulting VMDK is automatically placed in a folder called kubevols on that datastore. The rest of the logic around building the container volume/VMDK is taken care of by the provider.

cormac@pks-cli:~/Stateful-Demo$ cat couchbase-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: couchbasesc
provisioner: kubernetes.io/vsphere-volume
parameters:
    diskformat: thin
    storagePolicyName: gold
 
Next thing to look at is the Couchbase service YAML file. A service provides a networking endpoint for an application, or to be more precise, for a set of one or more pods. This is core K8s stuff: if a pod dies and is replaced with a new pod, its IP address may well change, but through the use of a service we don’t need to worry about the IP addresses of the pods. The service takes care of this, handling pods dying and new pods being created. A service is connected to the application/pod(s) through the use of labels. Note that the file below actually defines two services: a headless couchbase service (clusterIP: None), which gives the StatefulSet pods stable network identities, and a couchbase-ui service. Since the couchbase-ui service is of type LoadBalancer, it will load balance requests across all the pods that make up the application.
 
Service
cormac@pks-cli:~/Stateful-Demo$ cat couchbase-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: couchbase
  labels:
    app: couchbase
spec:
  ports:
  - port: 8091
    name: couchbase
  # *.couchbase.default.svc.cluster.local
  clusterIP: None
  selector:
    app: couchbase
---
apiVersion: v1
kind: Service
metadata:
  name: couchbase-ui
  labels:
    app: couchbase-ui
spec:
  ports:
  - port: 8091
    name: couchbase
  selector:
    app: couchbase
  sessionAffinity: ClientIP
  type: LoadBalancer
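As an aside, because the first service is headless (clusterIP: None), each pod created by the StatefulSet gets a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local; that is what the COUCHBASE_MASTER environment variable in the StatefulSet below relies on. Once the StatefulSet is up, this can be checked from inside the cluster with something along these lines, assuming a DNS lookup tool such as nslookup is available in the image (it may not be):

kubectl exec -it couchbase-0 -- nslookup couchbase-0.couchbase.default.svc.cluster.local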
 
Last but not least, here is the StatefulSet, which initially has been configured for a single pod deployment. You can see the number of replicas currently set to 1, and the size and access mode of the persistent volume specified in the volumeClaimTemplates portion of the YAML. Note the use of the same label as seen in the service YAML. There is also a reference to the storage class. And of course, it references the containerized Couchbase application, which I pulled down from the external repository and placed in my own Harbor registry, where I could then scan it for any anomalies. Fortunately, the scan passed with no issues.
 
StatefulSet
cormac@pks-cli:~/Stateful-Demo$ cat couchbase-statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: couchbase
spec:
  serviceName: "couchbase"
  replicas: 1
  template:
    metadata:
      labels:
        app: couchbase
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: couchbase
        image: harbor.rainpole.com/pks_project/couchbase:k8s-petset
        ports:
        - containerPort: 8091
        volumeMounts:
        - name: couchbase-data
          mountPath: /opt/couchbase/var
        env:
          - name: COUCHBASE_MASTER
            value: "couchbase-0.couchbase.default.svc.cluster.local"
          - name: AUTO_REBALANCE
            value: "false"
  volumeClaimTemplates:
  - metadata:
      name: couchbase-data
      annotations:
        volume.beta.kubernetes.io/storage-class: couchbasesc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
 
Start the deployment
The deployment was pretty straightforward. I used kubectl to deploy the storage class, the service, and finally the StatefulSet; the commands would have looked something like what is shown below.
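With the three YAML files shown earlier, the sequence is along these lines (a sketch rather than a copy of my shell history):

kubectl create -f couchbase-sc.yaml
kubectl create -f couchbase-service.yaml
kubectl create -f couchbase-statefulset.yaml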
 
Now I did encounter one issue: I’m not sure why, but a folder needed for creating the persistent volumes on vSphere did not exist. The symptom was that my persistent volumes were not being created. I found the reason when I did a kubectl describe on my persistent volume claim.
 
cormac@pks-cli:~/Stateful-Demo$ kubectl describe pvc
Name:          couchbase-data-couchbase-0
Namespace:     default
StorageClass:  couchbasesc
Status:        Bound
Volume:        pvc-11af3bf5-eda6-11e8-b31f-005056823d0c
Labels:        app=couchbase
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-class=couchbasesc
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
Events:
  Type     Reason                 Age              From                         Message
  ----     ------                 ----             ----                         -------
  Warning  ProvisioningFailed     8m (x3 over 9m)  persistentvolume-controller  Failed to provision volume with StorageClass "couchbasesc": folder '/CH-Datacenter/vm/pcf_vms/f74b47da-1b9d-4978-89cd-36bf7789f6bf' not found
 
As the ProvisioningFailed event above shows, the folder was not found. I manually created the aforementioned folder (a command-line sketch of this is included below for reference), and then my persistent volume was successfully created. Next, I checked the events related to my pod by running a kubectl describe on it, and everything seemed to be working.
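For reference, and not necessarily how I did it at the time, a missing VM folder like this can also be created with govc, assuming the govc CLI is installed and configured to point at this vCenter:

govc folder.create /CH-Datacenter/vm/pcf_vms/f74b47da-1b9d-4978-89cd-36bf7789f6bf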
 
cormac@pks-cli:~/Stateful-Demo$ kubectl describe pods
Name:               couchbase-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               317c87f9-8923-4630-978f-df73125d01f3/192.50.0.145
Start Time:         Thu, 22 Nov 2018 10:01:13 +0000
Labels:             app=couchbase
                    controller-revision-hash=couchbase-558dfddf8
                    statefulset.kubernetes.io/pod-name=couchbase-0
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      StatefulSet/couchbase
Containers:
  couchbase:
    Container ID:
    Image:          harbor.rainpole.com/pks_project/couchbase:k8s-petset
    Image ID:
    Port:           8091/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      COUCHBASE_MASTER:  couchbase-0.couchbase.default.svc.cluster.local
      AUTO_REBALANCE:    false
    Mounts:
      /opt/couchbase/var from couchbase-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4prv9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  couchbase-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  couchbase-data-couchbase-0
    ReadOnly:   false
  default-token-4prv9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4prv9
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                           Message
  ----    ------     ----  ----                                           -------
  Normal  Scheduled  17s   default-scheduler                              Successfully assigned default/couchbase-0 to 317c87f9-8923-4630-978f-df73125d01f3
  Normal  Pulling    12s   kubelet, 317c87f9-8923-4630-978f-df73125d01f3  pulling image "harbor.rainpole.com/pks_project/couchbase:k8s-petset"
  Normal  Pulled     0s    kubelet, 317c87f9-8923-4630-978f-df73125d01f3  Successfully pulled image "harbor.rainpole.com/pks_project/couchbase:k8s-petset"
  Normal  Created    0s    kubelet, 317c87f9-8923-4630-978f-df73125d01f3  Created container
  Normal  Started    0s    kubelet, 317c87f9-8923-4630-978f-df73125d01f3  Started container
cormac@pks-cli:~/Stateful-Demo$
 
So far, so good. Now in the previous output, I also highlighted an IP address which appeared in the Node: field. This is the K8s node on which the pod is running. In order to access the Couchbase UI, I need the IP address of one of the K8s nodes and the port to which the Couchbase container’s port has been mapped. This is how I get that port info.
 
cormac@pks-cli:~/Stateful-Demo$ kubectl get svc
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
couchbase      ClusterIP      None                    8091/TCP         18h
couchbase-ui   LoadBalancer   10.100.200.36        8091:32691/TCP   18h
kubernetes     ClusterIP      10.100.200.1            443/TCP          18h
 
And now if I point my browser to that IP address and that port (in my case 192.50.0.145:32691), I should get the Couchbase UI. In fact, I should be able to connect to any of the K8s nodes on that port, and the service will route the request to a pod for this application. Once I see the login prompt, I need to provide some Couchbase login credentials (this app was built with Administrator/password credentials), and once I log in, I should see my current deployment of 1 active server, which is correct since I have only a single replica requested in the StatefulSet YAML file.
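Incidentally, a quick way to confirm from the CLI VM that the mapped port is responding, before reaching for a browser, is with curl (assuming it is installed); any HTTP response back indicates that the NodePort mapping and the service are wired up:

curl -I http://192.50.0.145:32691/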
 
Scaling out
Again, so far so good. Now let’s scale out the application from a single replica to 3 replicas. How would I do that with a StatefulSet? It can all be done via kubectl. Let’s look at the current StatefulSet, and then scale it out. In the first output, you can see that the number of replicas is 1.
 
cormac@pks-cli:~/Stateful-Demo$ kubectl get statefulset
NAME        DESIRED   CURRENT   AGE
couchbase   1         1         17m
 
cormac@pks-cli:~/Stateful-Demo$ kubectl describe statefulset
Name:               couchbase
Namespace:          default
CreationTimestamp:  Thu, 22 Nov 2018 10:01:13 +0000
Selector:           app=couchbase
Labels:             app=couchbase
Annotations:        <none>
Replicas:           1 desired | 1 total
Update Strategy:    OnDelete
Pods Status:        1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=couchbase
  Containers:
   couchbase:
    Image:      harbor.rainpole.com/pks_project/couchbase:k8s-petset
    Port:       8091/TCP
    Host Port:  0/TCP
    Environment:
      COUCHBASE_MASTER:  couchbase-0.couchbase.default.svc.cluster.local
      AUTO_REBALANCE:    false
    Mounts:
      /opt/couchbase/var from couchbase-data (rw)
  Volumes:  <none>
Volume Claims:
  Name:          couchbase-data
  StorageClass:  couchbasesc
  Labels:        <none>
  Annotations:   volume.beta.kubernetes.io/storage-class=couchbasesc
  Capacity:      1Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  17m   statefulset-controller  create Pod couchbase-0 in StatefulSet couchbase successful
cormac@pks-cli:~/Stateful-Demo$
 

Let’s now go ahead and increase the number of replicas to 3. Here we should not only observe the number of pods increasing (using the incremental numbering scheme mentioned in the introduction), but we should also see the number of persistent volumes increasing as well. Let’s look at that next. I’ll run the kubectl get commands a few times so you can see the pod and PV numbers increment gradually.
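As an aside, kubectl scale (used below) is the imperative way to change the replica count; an equivalent alternative would be to patch the StatefulSet directly, along these lines:

kubectl patch statefulset couchbase -p '{"spec":{"replicas":3}}'

Either way, the StatefulSet controller creates the additional pods, and their volume claims, one at a time.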

 
cormac@pks-cli:~/Stateful-Demo$ kubectl scale statefulset couchbase --replicas=3
statefulset.apps/couchbase scaled
cormac@pks-cli:~/Stateful-Demo$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                STORAGECLASS   REASON    AGE
pvc-11af3bf5-eda6-11e8-b31f-005056823d0c   1Gi        RWO            Delete           Bound     default/couchbase-data-couchbase-0   couchbasesc              18h
pvc-38e52631-ee40-11e8-981a-005056823d0c   1Gi        RWO            Delete           Bound     default/couchbase-data-couchbase-1   couchbasesc              2s
 
cormac@pks-cli:~/Stateful-Demo$ kubectl get pods
NAME          READY     STATUS              RESTARTS   AGE
couchbase-0   1/1       Running             0          19m
couchbase-1   0/1       ContainerCreating   0          16s
 
cormac@pks-cli:~/Stateful-Demo$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                STORAGECLASS   REASON    AGE
pvc-11af3bf5-eda6-11e8-b31f-005056823d0c   1Gi        RWO            Delete           Bound     default/couchbase-data-couchbase-0   couchbasesc              18h
pvc-38e52631-ee40-11e8-981a-005056823d0c   1Gi        RWO            Delete           Bound     default/couchbase-data-couchbase-1   couchbasesc              31s
pvc-48723d25-ee40-11e8-981a-005056823d0c   1Gi        RWO            Delete           Bound     default/couchbase-data-couchbase-2   couchbasesc              5s
 
cormac@pks-cli:~/Stateful-Demo$ kubectl get pods
NAME          READY     STATUS              RESTARTS   AGE
couchbase-0   1/1       Running             0          19m
couchbase-1   1/1       Running             0          41s
couchbase-2   0/1       ContainerCreating   0          15s
 
cormac@pks-cli:~/Stateful-Demo$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                STORAGECLASS   REASON    AGE
pvc-11af3bf5-eda6-11e8-b31f-005056823d0c   1Gi        RWO            Delete           Bound     default/couchbase-data-couchbase-0   couchbasesc              18h
pvc-38e52631-ee40-11e8-981a-005056823d0c   1Gi        RWO            Delete           Bound     default/couchbase-data-couchbase-1   couchbasesc              2m
pvc-48723d25-ee40-11e8-981a-005056823d0c   1Gi        RWO            Delete           Bound     default/couchbase-data-couchbase-2   couchbasesc              1m
 
cormac@pks-cli:~/Stateful-Demo$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
couchbase-0   1/1       Running   0          21m
couchbase-1   1/1       Running   0          2m
couchbase-2   1/1       Running   0          2m
 
Let’s take a look at the StatefulSet before going back to the Couchbase UI to see what has happened there. We can now see that the number of replicas has indeed increased, and the events at the end of the output show what has just happened.
 
cormac@pks-cli:~/Stateful-Demo$ kubectl get statefulset
NAME        DESIRED   CURRENT   AGE
couchbase   3         3         21m
 
cormac@pks-cli:~/Stateful-Demo$ kubectl describe statefulset
Name:               couchbase
Namespace:          default
CreationTimestamp:  Thu, 22 Nov 2018 10:01:13 +0000
Selector:           app=couchbase
Labels:             app=couchbase
Annotations:        <none>
Replicas:           3 desired | 3 total
Update Strategy:    OnDelete
Pods Status:        3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=couchbase
  Containers:
   couchbase:
    Image:      harbor.rainpole.com/pks_project/couchbase:k8s-petset
    Port:       8091/TCP
    Host Port:  0/TCP
    Environment:
      COUCHBASE_MASTER:  couchbase-0.couchbase.default.svc.cluster.local
      AUTO_REBALANCE:    false
    Mounts:
      /opt/couchbase/var from couchbase-data (rw)
  Volumes:  <none>
Volume Claims:
  Name:          couchbase-data
  StorageClass:  couchbasesc
  Labels:        <none>
  Annotations:   volume.beta.kubernetes.io/storage-class=couchbasesc
  Capacity:      1Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  21m   statefulset-controller  create Pod couchbase-0 in StatefulSet couchbase successful
  Normal  SuccessfulCreate  2m    statefulset-controller  create Claim couchbase-data-couchbase-1 Pod couchbase-1 in StatefulSet couchbase success
  Normal  SuccessfulCreate  2m    statefulset-controller  create Pod couchbase-1 in StatefulSet couchbase successful
  Normal  SuccessfulCreate  2m    statefulset-controller  create Claim couchbase-data-couchbase-2 Pod couchbase-2 in StatefulSet couchbase success
  Normal  SuccessfulCreate  2m    statefulset-controller  create Pod couchbase-2 in StatefulSet couchbase successful
cormac@pks-cli:~/Stateful-Demo$
 
OK, our final step is to check the application. For that we go back to the Couchbase UI and take a look at the “servers”. The first thing we notice is that there are now two new servers that are Pending Rebalance, as shown in the lower right-hand corner of the UI.
 
 
When we click on it, we are taken to the Server Nodes view – Pending Rebalance. Now, not only do we see an option to Rebalance, but we also have a failover warning stating that at least two servers with the data service are required to provide replication.
 
 
Let’s click on the Rebalance button next. This will kick off the rebalance activity across all 3 nodes.
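If you prefer the command line to the UI, the same rebalance can in principle be triggered with the couchbase-cli tool from inside one of the pods. A sketch, assuming the tool is in its default location in this image and using the Administrator/password credentials mentioned earlier:

kubectl exec -it couchbase-0 -- /opt/couchbase/bin/couchbase-cli rebalance -c localhost:8091 -u Administrator -p password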
 
 
And finally, our Couchbase database should be balanced across all 3 nodes, with the Fail Over option now available as well.
 
 
Conclusion
So that was pretty seamless, wasn’t it? Hopefully that has given you a good idea of the purpose of StatefulSets. Hopefully you can also see how nicely they integrate with the vSphere Cloud Provider (VCP) to provide persistent volumes on vSphere storage for Kubernetes containerized applications.
And finally, just to show you that these volumes are on the vSAN datastore (the datastore that matches the “gold” policy in the storage class), here are the 3 volumes (VMDKs) in the kubevols folder.
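If you would rather verify this from the command line than the vSphere client, something along these lines should list those VMDKs, assuming the govc CLI is configured against this vCenter and the vSAN datastore is named vsanDatastore:

govc datastore.ls -ds vsanDatastore kubevols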
 