Portworx, STORK and container volume snapshots

As I continue on my cloud native storage journey, I found myself looking at Portworx. The reason for this is that Portworx provides a plugin for the Heptio Velero product, and I was interested to see how this behaved on top of my vSphere on-premises infrastructure. I’ve written about Velero a few times already, and done a few posts where I leveraged the Restic plugin for snapshot functionality. Thus, I wanted to see how Portworx achieved the same thing, and to learn a bit more about STORK, Portworx’s Storage Orchestrator for Kubernetes. I’ve written about Portworx already on this site, having met them at a number of events. So to begin with, I just want to look at their snapshot functionality, and later on (in another post), I’ll see if I can get it to work with Velero.

I am not going to cover the installation process. This is covered in great detail on Portworx’s own site here. What I do want to look at is STORK, and in particular its Volume Snapshot support. Since this is on-premises vSphere, I wanted to take “local” STORK snapshots. These are per-volume snapshots that are stored locally in the Portworx cluster’s storage pools.

To begin with, this is Kubernetes v1.14.1 and Portworx version 2.0.3.4. I have created a Cassandra StatefulSet with 3 replicas, deployed in its own namespace, also called cassandra. I also have another namespace called cass-from-snap. The objective here is to snapshot the Cassandra application, and restore the snapshot(s) to a different namespace, in this case cass-from-snap. Various instructions around STORK local snapshot operations can be found on the Portworx site here, although I had to modify the cloud snap instructions slightly to work on-prem.
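
For reference, the cass-from-snap target namespace can be created up front with a simple one-liner such as:

$ kubectl create namespace cass-from-snap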

Cassandra StatefulSet

Here is my deployed Cassandra StatefulSet. As mentioned, it has 3 replicas, thus there are 3 pods and 3 persistent volumes (I’ve aliased my ‘kubectl’ command to simply ‘k’ below). Note that these PVs are using the Portworx provisioner, as specified in the StorageClass used by these PVs.

$ k get sts
NAME      READY AGE
cassandra 3/3   21h


$ k get pods
NAME        READY STATUS  RESTARTS AGE
cassandra-0 1/1   Running 0        21h
cassandra-1 1/1   Running 0        18h
cassandra-2 1/1   Running 0        18h


$ k get pvc
NAME                       STATUS VOLUME                                   CAPACITY ACCESS MODES STORAGECLASS AGE
cassandra-data-cassandra-0 Bound  pvc-873a5745-717f-11e9-ac93-005056b82121 1Gi      RWO          cass-sc      21h
cassandra-data-cassandra-1 Bound  pvc-9541c6b5-7196-11e9-ac93-005056b82121 1Gi      RWO          cass-sc      18h
cassandra-data-cassandra-2 Bound  pvc-cee730f5-7196-11e9-ac93-005056b82121 1Gi      RWO          cass-sc      18h


$ k get pv
NAME                                     CAPACITY ACCESS MODES  RECLAIM POLICY STATUS CLAIM                                 STORAGECLASS REASON AGE
pvc-873a5745-717f-11e9-ac93-005056b82121 1Gi      RWO           Delete         Bound   cassandra/cassandra-data-cassandra-0 cass-sc             21h
pvc-9541c6b5-7196-11e9-ac93-005056b82121 1Gi      RWO           Delete         Bound   cassandra/cassandra-data-cassandra-1 cass-sc             18h
pvc-cee730f5-7196-11e9-ac93-005056b82121 1Gi      RWO           Delete         Bound   cassandra/cassandra-data-cassandra-2 cass-sc             18h


$ k get sc
NAME               PROVISIONER                   AGE
cass-sc            kubernetes.io/portworx-volume 21h
stork-snapshot-sc  stork-snapshot                40h
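
For completeness, the cass-sc storage class in the list above was created with something along these lines (a sketch rather than my exact manifest; the repl and priority_io parameters shown are common Portworx options and are assumptions on my part):

$ cat cass-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cass-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"            # number of replicas - illustrative value
  priority_io: "high"  # IO priority - illustrative value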

What you will note in the Storage Classes is that there is already a storage class called stork-snapshot-sc for STORK snapshots. In later versions of Portworx, the STORK functionality is included in the install. We can describe it to get more details.

$ k describe sc stork-snapshot-sc
Name: stork-snapshot-sc
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"stork-snapshot-sc"},"provisioner":"stork-snapshot"}

Provisioner: stork-snapshot
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
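
As an aside, if you want to verify that the STORK components themselves are up and running, something like the following should do it (this assumes a default Portworx install, which deploys STORK into the kube-system namespace with a name=stork label):

$ kubectl get pods -n kube-system -l name=stork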

Creating the Snapshot

Now, since there are 3 PVs in my application, I am going to attempt to take a group snapshot. To do this, you need to use the GroupVolumeSnapshot CRD object. Here is my YAML file to create this object. The interesting entry here is restoreNamespaces. These namespaces must be specified to allow the snapshots to be used to create PVCs in them.

$ cat groupsnapshotspec.yaml
apiVersion: stork.libopenstorage.org/v1alpha1
kind: GroupVolumeSnapshot
metadata:
  name: cassandra-group-snapshot
  namespace: cassandra
spec:
  pvcSelector:
    matchLabels:
      app: cassandra
  restoreNamespaces:
  - cassandra
  - cass-from-snap
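
Before creating it, it is worth double-checking that the label selector actually matches the PVCs you intend to snapshot, for example:

$ k get pvc -n cassandra -l app=cassandra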

Let’s go ahead and create the GroupVolumeSnapshot object, which will also take a snapshot of each volume whose PVC matches the app: cassandra label in the specified namespace, cassandra.

$ k create -f groupsnapshotspec.yaml
groupvolumesnapshot.stork.libopenstorage.org/cassandra-group-snapshot created

$ k get groupvolumesnapshot
NAME                     AGE
cassandra-group-snapshot 46s

$ k describe groupvolumesnapshot
Name: cassandra-group-snapshot
Namespace: cassandra
Labels: <none>
Annotations: <none>
API Version: stork.libopenstorage.org/v1alpha1
Kind: GroupVolumeSnapshot
Metadata:
  Creation Timestamp: 2019-05-09T08:45:03Z
  Generation: 4
  Resource Version: 400252
  Self Link: /apis/stork.libopenstorage.org/v1alpha1/namespaces/cassandra/groupvolumesnapshots/cassandra-group-snapshot
  UID: bce263b0-7236-11e9-ac93-005056b82121
Spec:
  Max Retries: 0
  Options: <nil>
  Post Exec Rule:
  Pre Exec Rule:
  Pvc Selector:
    Match Labels:
      App: cassandra
  Restore Namespaces:
    cassandra
    cass-from-snap
Status:
  Num Retries: 0
  Stage: Final
  Status: Successful
  Volume Snapshots:
    Conditions:
      Last Transition Time: 2019-05-09T08:45:08Z
      Message: Snapshot created successfully and it is ready
      Reason:
      Status: True
      Type: Ready
    Data Source:
      Portworx Volume:
        Snapshot Id: 159121421640338900
        Snapshot Type: local
    Parent Volume ID: 1151451828065260426
    Task ID:
    Volume Snapshot Name: cassandra-group-snapshot-cassandra-data-cassandra-1-bce263b0-7236-11e9-ac93-005056b82121
    Conditions:
      Last Transition Time: 2019-05-09T08:45:08Z
      Message: Snapshot created successfully and it is ready
      Reason:
      Status: True
      Type: Ready
    Data Source:
      Portworx Volume:
        Snapshot Id: 490873347388237011
        Snapshot Type: local
    Parent Volume ID: 1020043023963546333
    Task ID:
    Volume Snapshot Name: cassandra-group-snapshot-cassandra-data-cassandra-0-bce263b0-7236-11e9-ac93-005056b82121
    Conditions:
      Last Transition Time: 2019-05-09T08:45:08Z
      Message: Snapshot created successfully and it is ready
      Reason:
      Status: True
      Type: Ready
    Data Source:
      Portworx Volume:
        Snapshot Id: 566238584674927305
        Snapshot Type: local
    Parent Volume ID: 110660044012912869
    Task ID:
    Volume Snapshot Name: cassandra-group-snapshot-cassandra-data-cassandra-2-bce263b0-7236-11e9-ac93-005056b82121
Events: <none>

Good – it looks like the snapshot attempt was successful and that I now have 3 snapshots, one for each of the PVs in my Cassandra StatefulSet. We can examine each snapshot in further detail using the following commands.

$ kubectl get volumesnapshot
NAME                                                                                     AGE
cassandra-group-snapshot-cassandra-data-cassandra-0-bce263b0-7236-11e9-ac93-005056b82121 10m
cassandra-group-snapshot-cassandra-data-cassandra-1-bce263b0-7236-11e9-ac93-005056b82121 10m
cassandra-group-snapshot-cassandra-data-cassandra-2-bce263b0-7236-11e9-ac93-005056b82121 10m

$ kubectl describe volumesnapshot cassandra-group-snapshot-cassandra-data-cassandra-0-bce263b0-7236-11e9-ac93-005056b82121
Name: cassandra-group-snapshot-cassandra-data-cassandra-0-bce263b0-7236-11e9-ac93-005056b82121
Namespace: cassandra
Labels: <none>
Annotations: stork/snapshot-restore-namespaces: cassandra,cass-from-snap
API Version: volumesnapshot.external-storage.k8s.io/v1
Kind: VolumeSnapshot
Metadata:
  Creation Timestamp: 2019-05-09T08:45:05Z
  Generation: 1
  Owner References:
    API Version: stork.libopenstorage.org/v1alpha1
    Kind: GroupVolumeSnapshot
    Name: cassandra-group-snapshot
    UID: bce263b0-7236-11e9-ac93-005056b82121
  Resource Version: 400248
  Self Link: /apis/volumesnapshot.external-storage.k8s.io/v1/namespaces/cassandra/volumesnapshots/cassandra-group-snapshot-cassandra-data-cassandra-0-bce263b0-7236-11e9-ac93-005056b82121
  UID: be4d14b3-7236-11e9-ac93-005056b82121
Spec:
  Persistent Volume Claim Name: cassandra-data-cassandra-0
  Snapshot Data Name: cassandra-group-snapshot-cassandra-data-cassandra-0-bce263b0-7236-11e9-ac93-005056b82121
Status:
  Conditions:
    Last Transition Time: 2019-05-09T08:45:08Z
    Message: Snapshot created successfully and it is ready
    Reason:
    Status: True
    Type: Ready
  Creation Timestamp: <nil>
Events: <none>
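
Note that the Snapshot Data Name seen above refers to a corresponding VolumeSnapshotData object, part of the same external-storage snapshot CRDs that STORK uses. Assuming those CRDs were registered by the install, these objects can be listed in the same way:

$ kubectl get volumesnapshotdatas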

Restoring a new PVC from a Snapshot

At this point, the next step is to try to restore these snapshots as PVCs in another namespace. What we have already verified is the following:

  1. STORK is already installed
  2. The StorageClass called stork-snapshot-sc is already created. This is used for creating PVCs from snapshots.
  3. We have specified that we are allowing snapshots to be restored to other namespaces, namely cass-from-snap.
  4. We have successfully taken snapshots of our Cassandra application.

We can now go ahead and build a YAML file which details how a PVC should be restored from a snapshot. Here is what mine looks like. Of note are the annotations. These are used to specify (a) the snapshot we wish to restore as a PVC, and (b) the namespace in which the source snapshot resides, which is needed when restoring to a different namespace, as we are doing here.

$ cat restore-from-snap.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cassandra-clone-0
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: cassandra-group-snapshot-cassandra-data-cassandra-0-bce263b0-7236-11e9-ac93-005056b82121
    stork/snapshot-source-namespace: cassandra
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: stork-snapshot-sc
  resources:
    requests:
      storage: 1Gi

Let’s now switch to the destination namespace (cass-from-snap) and create the PVC from the snapshot. I am using a krew plugin called change-ns to allow me to switch to a new namespace context.
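
If you do not have that plugin to hand, the same switch can be done with kubectl itself (a sketch, assuming a reasonably recent kubectl), or you can simply add -n cass-from-snap to each command:

$ kubectl config set-context --current --namespace=cass-from-snap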

$ k config get-contexts
CURRENT NAME                                         CLUSTER        AUTHINFO NAMESPACE
        cass-from-snap/portworx/kubernetes-admin-px  portworx       kubernetes-admin-px cass-from-snap
*       cassandra/portworx/kubernetes-admin-px       portworx       kubernetes-admin-px cassandra
        default/portworx/kubernetes-admin-px         portworx       kubernetes-admin-px default
        k8s-cluster-01                               k8s-cluster-01 43e6875c-feb5-4a3a-9c21-f642028701ab
        kubernetes-admin-px@portworx                 portworx       kubernetes-admin-px
        kubernetes106                                kubeadm        kubernetes-admin2
$ k change-ns cass-from-snap
namespace changed to "cass-from-snap"
$ k config get-contexts
CURRENT NAME                                         CLUSTER        AUTHINFO NAMESPACE
*       cass-from-snap/portworx/kubernetes-admin-px  portworx       kubernetes-admin-px cass-from-snap
        cassandra/portworx/kubernetes-admin-px       portworx       kubernetes-admin-px cassandra
        default/portworx/kubernetes-admin-px         portworx       kubernetes-admin-px default
        k8s-cluster-01                               k8s-cluster-01 43e6875c-feb5-4a3a-9c21-f642028701ab
        kubernetes-admin-px@portworx                 portworx       kubernetes-admin-px
        kubernetes106                                kubeadm        kubernetes-admin2
$ k get pvc
No resources found.

$ k create -f restore-from-snap.yaml
persistentvolumeclaim/cassandra-clone-0 created

$ k get pvc
NAME              STATUS   VOLUME CAPACITY ACCESS MODES STORAGECLASS      AGE
cassandra-clone-0 Pending                               stork-snapshot-sc 7s

$ k get pvc
NAME              STATUS VOLUME                                   CAPACITY ACCESS MODES STORAGECLASS      AGE
cassandra-clone-0 Bound  pvc-ae84e5a6-723b-11e9-ac93-005056b82121 1Gi      RWO          stork-snapshot-sc 29s

$ k get pv
NAME                                     CAPACITY    ACCESS MODES RECLAIM POLICY STATUS CLAIM                                STORAGECLASS REASON AGE
pvc-873a5745-717f-11e9-ac93-005056b82121 1Gi         RWO          Delete         Bound  cassandra/cassandra-data-cassandra-0 cass-sc             22h
pvc-9541c6b5-7196-11e9-ac93-005056b82121 1Gi         RWO          Delete         Bound  cassandra/cassandra-data-cassandra-1 cass-sc             19h
pvc-ae84e5a6-723b-11e9-ac93-005056b82121 1Gi         RWO          Delete         Bound  cass-from-snap/cassandra-clone-0     stork-snapshot-sc   11s
pvc-cee730f5-7196-11e9-ac93-005056b82121 1Gi         RWO          Delete         Bound  cassandra/cassandra-data-cassandra-2 cass-sc             19h
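
To actually consume the clone, the new PVC can be mounted into a pod like any other Portworx volume. Here is a minimal sketch (the pod name, busybox image and mount path are my own choices for illustration, not something taken from the run above):

$ cat clone-inspect-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: clone-inspect
  namespace: cass-from-snap
spec:
  containers:
  - name: inspect
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cass-clone
      mountPath: /cassandra-data      # cloned Cassandra data will be visible here
  volumes:
  - name: cass-clone
    persistentVolumeClaim:
      claimName: cassandra-clone-0    # the PVC restored from the snapshot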

Excellent. We have now managed to use STORK to create snapshots, and to restore a Portworx PVC (and resulting PV) from one namespace to another. However, this isn’t really a backup. We haven’t captured any container information or metadata. The next step is to see if I can get it integrated with Heptio Velero, and do a full backup and restore of my Cassandra DB using Portworx volumes and STORK snapshot management features.

2 Replies to “Portworx, STORK and container volume snapshots”

  1. What are the exact differences between Velero and Stork, do they both snapshot volumes?

    1. I have only looked at the ‘snapshot’ aspect of STORK – it appears it can do many other things. But in the context of what I was interested in, I just used it to create and delete snapshots of PVCs. You can read more about STORK here – https://portworx.com/stork-storage-orchestration-kubernetes/ (there is a link in the post as well).

      Velero works at a different granularity. It can be used to backup/restore PVCs, but it also captures metadata around Pods, Namespaces, Secrets, ConfigMaps, etc. Basically, it captures all parts of an application so that it can be restored. Now Velero does not have built-in functionality to snapshot PVCs – it relies on third party plugins for this. There are plugins for the different cloud providers, as well as other 3rd parties. For example, Portworx provides a plugin to Velero to do snapshots when Velero initiates a backup. There is also a restic plugin which works on vSphere for capturing PV contents.
