Velero Revisited – Manually backing up VKS clusters using Velero
It’s been some time since I looked at how to use the Velero CLI to back up and restore modern applications running in a Kubernetes cluster. However, after publishing how to use the new VKS Manager (VKSM) Data Protection (DP) earlier this week, it was mentioned that many customers on their VCF 9.x journey, who are using the Supervisor and vSphere Kubernetes Service to deploy VKS clusters, have not yet deployed VCF Automation into their VCF stack. This means that they do not have VKSM DP available to them just yet. So the question was whether the Velero CLI could be used manually to take backups and do restores of VKS workloads. The answer is of course yes, and in this blog I will show how to deploy Velero (client and server) so that you can back up your modern app workloads running in VKS. We will be using the new and very powerful VCF CLI for some of these tasks.
Deploy the Velero Client/CLI
Step 1 in the process is to deploy the Velero Client. This involves downloading a zipped-up Velero binary to your desktop, and then placing it somewhere in your execution path (full instructions here). I downloaded it to an Ubuntu VM that I have in my environment. Once the Velero Client/CLI has been installed, you can check the version using the command ‘velero version’. Ignore the server error below. This is simply because we have not yet installed the server part of Velero onto the VKS cluster. The client version is 1.17.0, and we should match this with the server version on the VKS cluster.
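On Linux, the install is simply a matter of extracting the archive and copying the binary somewhere in your path. A minimal sketch, assuming the v1.17.0 Linux tarball has already been downloaded to the current directory (the exact filename depends on the build you download, so treat it as a placeholder):
$ tar -xvf velero-linux-amd64.tar.gz      # filename is an assumption - use the archive you downloaded
$ sudo cp velero /usr/local/bin/          # place the binary somewhere in your execution path
$ which velero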
$ velero version
Client:
Version: v1.17.0_vmware.1
Git commit: 3172d9f99c3d501aad9ddfac8176d783f7692dce-modified
<error getting server version: unable to retrieve the complete list of server APIs: velero.io/v1: no matches for velero.io/v1, Resource=>
Deploy the Velero Server onto VKS
To begin with, ensure that your KUBECONFIG is set up correctly and that you are pointing to the correct VKS cluster, i.e. the one where you wish to install the Velero server. I usually check by running a ‘kubectl get nodes’ to verify that I have the correct cluster context.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes-cluster-dkpp-hkpfp-r6j66 Ready control-plane 41m v1.33.3+vmware.1-fips
kubernetes-cluster-dkpp-kubernetes-cluster-dkpp-np-pk3w-smkvs46 Ready <none> 36m v1.33.3+vmware.1-fips
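If the wrong cluster shows up, you can list and switch contexts with standard kubectl. A quick sketch, where the context name below is just a placeholder for whatever your VKS cluster context is called in your KUBECONFIG:
$ kubectl config get-contexts
$ kubectl config use-context kubernetes-cluster-dkpp      # placeholder - substitute your own VKS cluster context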
Create a velero-data-values.yaml file
A yaml file is required to describe the details of the backup storage location (bsl). In this example, I am once again using MinIO to provide me with an S3 compatible bucket, but any S3 compatible bucket will suffice. I have created a bucket called “velero-backups”. Since I have TLS enabled, I am using an https URL. This means I need to provide a Certificate Authority certificate in the manifest, as shown below. This provides trust between Velero and the object store provider. I also need to provide credentials to access the object store.
$ cat velero-data-values.yaml
backupStorageLocation:
  bucket: velero-backups
  config:
    region: "minio"
    s3ForcePathStyle: "true"
    s3Url: "https://minio.rainpole.io:9000"
  caCert: |
    -----BEGIN CERTIFICATE-----
    MIIGFzCCA/+gAwIBAgICEBkwDQYJKoZIhvcNAQELBQAwgY0xEDAOBgNVBAMMB01J
    Tk9fQ0ExCzAJBgNVBAYTAklFMQ0wCwYDVQQIDARDb3JrMQ0wCwYDVQQHDARDb3Jr
    MRUwEwYDVQQKDAxWQ0YgRGl2aXNpb24xDTALBgNVBAsMBE9DVE8xKDAmBgkqhkiG
    ---> snip
    G+aB3AHlbfw45mkMlsk9jXl1sj21UYw/ZHJN
    -----END CERTIFICATE-----
credential: |
  [default]
  aws_access_key_id=admin
  aws_secret_access_key=password
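If the bucket does not already exist, it can be created up front with the MinIO mc client. A rough sketch, assuming the same endpoint and credentials shown in the data values above (the alias name ‘myminio’ is arbitrary):
$ mc alias set myminio https://minio.rainpole.io:9000 admin password
$ mc mb myminio/velero-backups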
Use the VCF CLI to add a package repo to the cluster
Our next step is to use the VCF CLI (my binary is called vcf-cli) to create a package repo. The VCF CLI can be downloaded from any Supervisor Namespace Summary page. The repo, once created, will contain our Velero package, which we can then install. The packages are retrieved from the VKS Standard Packages Repository. I have mentioned the VCF CLI a few times recently, such as using it to troubleshoot DSM deployed on VKS clusters. It is a very useful tool for VCF administrators as it allows command line interaction with the Supervisor and VCF Automation. Before we create the repo, let’s first ensure that we are pulling down the latest packages. The VKS Release Notes will tell you which path to use for the packages. At the time of writing, VKS 3.5 was the latest release, and the release notes report that the VKS Standard Packages v3.5.0+20251218 repository is available here:
projects.packages.broadcom.com/vsphere/supervisor/vks-standard-packages/3.5.0-20251218/vks-standard-packages:3.5.0-20251218
Always check the Release Notes for the latest path information. We will use this path when we create the repo. Note that I am still using the same KUBECONFIG context, pointing to my VKS cluster. Let’s add the repository. I am placing it in the namespace tkg-system.
$ vcf-cli package repository add standard-package-repo \
    --url projects.packages.broadcom.com/vsphere/supervisor/packages/2025.10.22/vks-standard-packages:3.5.0-20251022 \
    -n tkg-system
1:26:48PM: Updating package repository resource 'standard-package-repo' in namespace 'tkg-system'
1:26:48PM: Waiting for package repository reconciliation for 'standard-package-repo'
1:26:50PM: Fetching
            | apiVersion: vendir.k14s.io/v1alpha1
            | directories:
            | - contents:
            |   - imgpkgBundle:
            |       image: projects.packages.broadcom.com/vsphere/supervisor/packages/2025.10.22/vks-standard-packages@sha256:36b48ba005e884586512c2fda8c4598f426c6f78efa7f84f5b24087b49a6b52d
            |       tag: 3.5.0-20251022
            |     path: .
            |   path: "0"
            | kind: LockConfig
            |
1:26:50PM: Fetch succeeded
1:26:52PM: Template succeeded
1:26:52PM: Deploy started (3s ago)
1:26:56PM: Deploying
            | Target cluster 'https://10.96.0.1:443'
            | Changes
            | Namespace   Name                                                       Kind             Age  Op      Op st.                       Wait to  Rs  Ri
            | tkg-system  ako.kubernetes.vmware.com                                  PackageMetadata  -    create  fallback on update or noop  -        -   -
            | ^           ako.kubernetes.vmware.com.1.13.4+vmware.1-vks.1            Package          -    create  fallback on update or noop  -        -   -
            | ^           autoscaler.kubernetes.vmware.com                           PackageMetadata  9m   delete  -                           -        ok  -
            | ^           cert-manager.kubernetes.vmware.com.1.17.2+vmware.1-vks.1   Package          9m   delete  -                           -        ok  -
            | ^           cert-manager.kubernetes.vmware.com.1.17.2+vmware.2-vks.1   Package          -    create  fallback on update or noop  -        -   -
            | ^           cert-manager.kubernetes.vmware.com.1.18.2+vmware.1-vks.1   Package          9m   delete  -                           -        ok  -
            | ^           cert-manager.kubernetes.vmware.com.1.18.2+vmware.2-vks.2   Package          -    create  fallback on update or noop  -        -   -
. . .
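Before moving on, it is worth confirming that the repository has reconciled successfully. This is an assumption on my part that the ‘package repository list’ sub-command behaves like its Tanzu CLI equivalent, but it should show the repository with a ‘Reconcile succeeded’ status:
$ vcf-cli package repository list -n tkg-system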
Let’s check if the Velero package is now included in the repository, and if so, what versions are available.
$ vcf-cli package available list -n tkg-system | grep velero
velero.kubernetes.vmware.com    velero

$ vcf-cli package available get velero.kubernetes.vmware.com
NAME:                 velero.kubernetes.vmware.com
DISPLAY-NAME:         velero
CATEGORIES:
  - data protection
SHORT-DESCRIPTION:    Velero is an open source tool to safely backup and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
LONG-DESCRIPTION:     Velero is an open source tool to safely backup and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
PROVIDER:             VMware
MAINTAINERS:
  - name: Wenkai Yin
SUPPORT-DESCRIPTION:  https://github.com/vmware-tanzu/velero

VERSION                RELEASED-AT
1.16.1+vmware.1-vks.1  2025-05-19 12:30:00 +0000 UTC
1.16.2+vmware.1-vks.1  2025-08-05 12:30:00 +0000 UTC
1.17.0+vmware.1-vks.1  2025-09-16 11:30:00 +0000 UTC
Looks good. Velero version 1.17.0 is an available package version, which will match the client version we are using.
Install the Velero Server Package
Let’s proceed with the install of the Velero server package. Specify the namespace of the repo where the package is stored, the name of the package itself, the version you wish to install and the path to the velero-data-values.yaml created earlier. The actual Velero server components will be installed from the package into a namespace called velero.
$ vcf-cli package install velero --namespace tkg-system \
    --package velero.kubernetes.vmware.com \
    --version 1.17.0+vmware.1-vks.1 \
    --values-file velero-data-values.yaml
1:30:07PM: Pausing reconciliation for package installation 'velero' in namespace 'tkg-system'
1:30:08PM: Updating secret 'velero-tkg-system-values'
1:30:08PM: Resuming reconciliation for package installation 'velero' in namespace 'tkg-system'
1:30:08PM: Waiting for PackageInstall reconciliation for 'velero'
1:30:08PM: Waiting for generation 3 to be observed
1:30:09PM: Fetch started
1:30:09PM: Fetching
            | apiVersion: vendir.k14s.io/v1alpha1
            | directories:
            | - contents:
            |   - imgpkgBundle:
            |       image: projects.packages.broadcom.com/vsphere/supervisor/packages/2025.10.22/vks-standard-packages@sha256:5dd00ce6284efa836cae4abb351ab8987cf118f79d355c84ce2ba0a5ac5fbd29
            |     path: .
            |   path: "0"
            | kind: LockConfig
            |
1:30:09PM: Fetch succeeded
1:30:09PM: Template succeeded
1:30:10PM: Deploy started
1:30:10PM: Deploying
            | Target cluster 'https://10.96.0.1:443' (nodes: kubernetes-cluster-dkpp-hkpfp-r6j66, 1+)
            | Changes
            | Namespace  Name  Kind  Age  Op  Op st.  Wait to  Rs  Ri
            | Op:      0 create, 0 delete, 0 update, 0 noop, 0 exists
            | Wait to: 0 reconcile, 0 delete, 0 noop
            | Succeeded
1:30:10PM: Deploy succeeded
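The status of the PackageInstall can also be checked from the VCF CLI itself. Again, I am assuming here that the ‘package installed’ sub-commands mirror their Tanzu CLI counterparts:
$ vcf-cli package installed list -n tkg-system
$ vcf-cli package installed get velero -n tkg-system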
Check the Velero install status
We can now check if the package installed successfully. We can use kubectl to check on various objects in the velero namespace, such as deployments, replicaSets and Pods. We can also check that the BackupStorageLocation (bsl) is available, and by using the velero version command we can verify that the server portion is now reporting correctly.
$ kubectl get deploy -n velero
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
velero   1/1     1            1           4m38s

$ kubectl get rs -n velero
NAME                DESIRED   CURRENT   READY   AGE
velero-845bfc6654   1         1         1       4m59s

$ kubectl get pods -n velero
NAME                      READY   STATUS    RESTARTS   AGE
node-agent-f8qdp          1/1     Running   0          5m5s
velero-845bfc6654-69vqs   1/1     Running   0          5m5s

$ kubectl get bsl -n velero
NAME      PHASE       LAST VALIDATED   AGE     DEFAULT
default   Available   49s              5m22s   true

$ velero version
Client:
        Version: v1.17.0_vmware.1
        Git commit: 3172d9f99c3d501aad9ddfac8176d783f7692dce-modified
Server:
        Version: v1.17.0_vmware.1
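The Velero CLI also has its own command to report on backup storage locations. A ‘velero backup-location get’ should likewise show the default bsl with a phase of Available:
$ velero backup-location get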
Create a sample workload to backup and restore
To test out Velero backup and restore, I am going to use a simple Pod and PersistentVolumeClaim (pvc) combination. The PVC will have to use one of the existing StorageClasses (sc) in the VKS cluster. The Pod will also mount the volume onto the /demo folder in the busybox container.
As this is VKS on vSphere, the vSphere CSI driver is a necessary component for creating persistent volumes on vSphere storage. This also means that during the backup, Velero will request the creation of a CSI snapshot of the volume in order to back it up. VKS clusters on vSphere have all of the necessary vSphere CSI components to achieve this.
Here are the manifests and steps to create the simple app. First we create a namespace called cormac-ns, which is where our app will live. Then we get the StorageClasses, and then build the appropriate PVC manifest using one of the storage classes. Once the PVC is created, we can reference it as a Volume in the Pod.
$ kubectl create ns cormac-ns
namespace/cormac-ns created

$ kubectl get sc
NAME                                                              PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
sfo-w01-cl01-optimal-datastore-default-policy-raid1               csi.vsphere.vmware.com   Delete          Immediate              true                   25h
sfo-w01-cl01-optimal-datastore-default-policy-raid1-latebinding   csi.vsphere.vmware.com   Delete          WaitForFirstConsumer   true                   25h

$ cat example-pvc-cormac-ns.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-cormac-block-pvc
  namespace: cormac-ns
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: sfo-w01-cl01-optimal-datastore-default-policy-raid1

$ kubectl apply -f example-pvc-cormac-ns.yaml
persistentvolumeclaim/example-cormac-block-pvc created

$ kubectl get pvc -n cormac-ns -w
NAME                       STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                                          VOLUMEATTRIBUTESCLASS   AGE
example-cormac-block-pvc   Pending                                                                        sfo-w01-cl01-optimal-datastore-default-policy-raid1   <unset>                 4s
example-cormac-block-pvc   Pending   pvc-85cf01bc-52fb-4bc1-a74f-58990b2988bb   0                         sfo-w01-cl01-optimal-datastore-default-policy-raid1   <unset>                 4s
example-cormac-block-pvc   Bound     pvc-85cf01bc-52fb-4bc1-a74f-58990b2988bb   5Gi        RWO            sfo-w01-cl01-optimal-datastore-default-policy-raid1   <unset>                 5s

$ cat cormac-pod-with-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cormac-pod-1
  namespace: cormac-ns
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    supplementalGroups: [4000]
  containers:
    - name: busybox
      image: "dockerhub.packages.vcfd.broadcom.net/busybox:latest"
      command: [ "sleep", "1000000" ]
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        runAsNonRoot: true
        runAsUser: 1000
        seccompProfile:
          type: "RuntimeDefault"
      volumeMounts:
        - mountPath: "/demo"
          name: demo-vol
  volumes:
    - name: demo-vol
      persistentVolumeClaim:
        claimName: example-cormac-block-pvc

$ kubectl apply -f cormac-pod-with-volume.yaml
pod/cormac-pod-1 created

$ kubectl get pods -n cormac-ns -w
NAME           READY   STATUS              RESTARTS   AGE
cormac-pod-1   0/1     ContainerCreating   0          8s
cormac-pod-1   1/1     Running             0          14s

$ kubectl get pod,pvc -n cormac-ns
NAME               READY   STATUS    RESTARTS   AGE
pod/cormac-pod-1   1/1     Running   0          33s

NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                                          VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/example-cormac-block-pvc   Bound    pvc-85cf01bc-52fb-4bc1-a74f-58990b2988bb   5Gi        RWO            sfo-w01-cl01-optimal-datastore-default-policy-raid1   <unset>                 2m15s
We now have an application. If you wish, you can use kubectl exec to access the Pod and create files and directories in the /demo directory. You can then check whether the data is recovered after a backup and restore.
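For example, here is a quick way to drop some test data into the volume (the file name is purely for illustration):
$ kubectl exec -n cormac-ns cormac-pod-1 -- sh -c 'echo "hello from before the backup" > /demo/testfile.txt'
$ kubectl exec -n cormac-ns cormac-pod-1 -- ls -l /demo
Let’s now see if we can back up and restore this ‘critical’ application.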
Backup using Velero CLI
I will now initiate a backup. I do not want to back up the full VKS cluster, only the namespace cormac-ns, as this is where my app is running. After the backup has completed, I can use the suggested ‘velero backup describe‘ command to look at it in more detail.
$ velero backup create cormac-backup-1 --include-namespaces cormac-ns --wait
Backup request "cormac-backup-1" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
......................................................................
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe cormac-backup-1` and `velero backup logs cormac-backup-1`.

$ velero backup describe cormac-backup-1
Name:         cormac-backup-1
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/resource-timeout=10m0s
              velero.io/source-cluster-k8s-gitversion=v1.33.3+vmware.1-fips
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=33

Phase:  Completed

Namespaces:
  Included:  cormac-ns
  Excluded:  <none>

Resources:
  Included cluster-scoped:    <none>
  Excluded cluster-scoped:    volumesnapshotcontents.snapshot.storage.k8s.io
  Included namespace-scoped:  *
  Excluded namespace-scoped:  volumesnapshots.snapshot.storage.k8s.io

Label selector:  <none>

Or label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:    auto
File System Backup (Default):  false
Snapshot Move Data:            true
Data Mover:                    velero

TTL:  720h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  4h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2026-01-14 13:42:45 +0000 UTC
Completed:  2026-01-14 13:43:55 +0000 UTC

Expiration:  2026-02-13 13:42:44 +0000 UTC

Total items to be backed up:  65
Items backed up:              65

Backup Item Operations:  1 of 1 completed successfully, 0 failed (specify --details for more information)

Backup Volumes:
  Velero-Native Snapshots: <none included>

  CSI Snapshots:
    cormac-ns/example-cormac-block-pvc:
      Data Movement: included, specify --details for more information

  Pod Volume Backups: <none included>

HooksAttempted:  0
HooksFailed:     0
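In addition to describe, a simple ‘velero backup get’ lists all backups in the backup storage location along with their phase, which is a quick way to confirm the backup reached Completed:
$ velero backup get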
Firstly, it looks like the backup has completed successfully. Secondly, it has indeed used CSI Snapshots, and moved the snapshot data to my backup storage location (S3 bucket). Note that this section of the output recommends using the --details option to see more information about the data movement. Let’s run that now (note that I have snipped some of the output for conciseness).
$ velero backup describe cormac-backup-1 --details
Name:         cormac-backup-1
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/resource-timeout=10m0s
              velero.io/source-cluster-k8s-gitversion=v1.33.3+vmware.1-fips
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=33

Phase:  Completed

Namespaces:
  Included:  cormac-ns
  Excluded:  <none>

Resources:
  Included cluster-scoped:    <none>
  Excluded cluster-scoped:    volumesnapshotcontents.snapshot.storage.k8s.io
  Included namespace-scoped:  *
  Excluded namespace-scoped:  volumesnapshots.snapshot.storage.k8s.io

Label selector:  <none>

Or label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:    auto
File System Backup (Default):  false
Snapshot Move Data:            true
Data Mover:                    velero

TTL:  720h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  4h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2026-01-14 13:42:45 +0000 UTC
Completed:  2026-01-14 13:43:55 +0000 UTC

Expiration:  2026-02-13 13:42:44 +0000 UTC

Total items to be backed up:  65
Items backed up:              65

Backup Item Operations:
  Operation for persistentvolumeclaims cormac-ns/example-cormac-block-pvc:
    Backup Item Action Plugin:  velero.io/csi-pvc-backupper
    Operation ID:               du-f577e717-58bf-453b-8de8-c5f3b866855e.85cf01bc-52fb-4bcd93615
    Items to Update:            datauploads.velero.io velero/cormac-backup-1-mswhk
    Phase:                      Completed
    Progress description:       Completed
    Created:                    2026-01-14 13:43:01 +0000 UTC
    Started:                    2026-01-14 13:43:20 +0000 UTC
    Updated:                    2026-01-14 13:43:49 +0000 UTC

Resource List:
  data.packaging.carvel.dev/v1alpha1/Package:
    - cormac-ns/ako.kubernetes.vmware.com.1.13.4+vmware.1-vks.1
    - cormac-ns/cert-manager.kubernetes.vmware.com.1.17.2+vmware.2-vks.1
    - cormac-ns/cert-manager.kubernetes.vmware.com.1.18.2+vmware.2-vks.2
    <--snip-->
  data.packaging.carvel.dev/v1alpha1/PackageMetadata:
    - cormac-ns/ako.kubernetes.vmware.com
    - cormac-ns/cert-manager.kubernetes.vmware.com
    - cormac-ns/cluster-autoscaler.kubernetes.vmware.com
    <--snip-->
  v1/ConfigMap:
    - cormac-ns/kube-root-ca.crt
  v1/Event:
    - cormac-ns/cormac-pod-1.188a9ca69877b51d
    - cormac-ns/cormac-pod-1.188a9ca70666357e
    - cormac-ns/cormac-pod-1.188a9ca9254c4de3
    - cormac-ns/cormac-pod-1.188a9ca963e97572
    - cormac-ns/cormac-pod-1.188a9ca96cda3faf
    - cormac-ns/cormac-pod-1.188a9ca974af3290
    - cormac-ns/example-cormac-block-pvc.188a9c7c780cab0a
    - cormac-ns/example-cormac-block-pvc.188a9c8eb1bdf906
    - cormac-ns/example-cormac-block-pvc.188a9c8eb1e499a8
    - cormac-ns/example-cormac-block-pvc.188a9c8fc1b87d38
  v1/Namespace:
    - cormac-ns
  v1/PersistentVolume:
    - pvc-85cf01bc-52fb-4bc1-a74f-58990b2988bb
  v1/PersistentVolumeClaim:
    - cormac-ns/example-cormac-block-pvc
  v1/Pod:
    - cormac-ns/cormac-pod-1
  v1/ServiceAccount:
    - cormac-ns/default

Backup Volumes:
  Velero-Native Snapshots: <none included>

  CSI Snapshots:
    cormac-ns/example-cormac-block-pvc:
      Data Movement:
        Operation ID: du-f577e717-58bf-453b-8de8-c5f3b866855e.85cf01bc-52fb-4bcd93615
        Data Mover: velero
        Uploader Type: kopia
        Moved data Size (bytes): 0
      Result: succeeded

  Pod Volume Backups: <none included>

HooksAttempted:  0
HooksFailed:     0
Now we can see even more details about the data movement. We can see that it is using the built-in Velero data mover, which in turn uses Kopia. We can also see further details about the snapshot and data movement by querying some other custom resources, as we will see next.
Backup: Under the covers
The following commands examine some of the snapshot details, such as the snapshot location and class. You can add a “-o yaml” to the kubectl get commands, or use kubectl describe, to see more details. But it is the “dataupload” Custom Resource (CR) that is most useful. This is what ties the volume and resulting CSI snapshot to the data mover and backup storage location, and if there are any issues with that operation, this is the CR to check.
$ kubectl get volumesnapshotlocations -n velero
NAME      AGE
default   22m

$ kubectl get volumesnapshotclasses
NAME                         DRIVER                   DELETIONPOLICY   AGE
volumesnapshotclass-delete   csi.vsphere.vmware.com   Delete           80m

$ kubectl describe datauploads cormac-backup-1-mswhk -n velero
Name:         cormac-backup-1-mswhk
Namespace:    velero
Labels:       velero.io/async-operation-id=du-f577e717-58bf-453b-8de8-c5f3b866855e.85cf01bc-52fb-4bcd93615
              velero.io/backup-name=cormac-backup-1
              velero.io/backup-uid=f577e717-58bf-453b-8de8-c5f3b866855e
              velero.io/pvc-uid=85cf01bc-52fb-4bc1-a74f-58990b2988bb
Annotations:  <none>
API Version:  velero.io/v2alpha1
Kind:         DataUpload
Metadata:
  Creation Timestamp:  2026-01-14T13:43:01Z
  Generate Name:       cormac-backup-1-
  Generation:          5
  Owner References:
    API Version:     velero.io/v1
    Controller:      true
    Kind:            Backup
    Name:            cormac-backup-1
    UID:             f577e717-58bf-453b-8de8-c5f3b866855e
  Resource Version:  12767
  UID:               f4da9b85-db30-4df7-9ebe-f61148c71c7c
Spec:
  Backup Storage Location:  default
  Csi Snapshot:
    Driver:           csi.vsphere.vmware.com
    Snapshot Class:   volumesnapshotclass-delete
    Storage Class:    sfo-w01-cl01-optimal-datastore-default-policy-raid1
    Volume Snapshot:  velero-example-cormac-block-pvc-f968m
  Operation Timeout:  10m0s
  Snapshot Type:      CSI
  Source Namespace:   cormac-ns
  Source PVC:         example-cormac-block-pvc
Status:
  Accepted By Node:      kubernetes-cluster-dkpp-kubernetes-cluster-dkpp-np-pk3w-smkvs46
  Accepted Timestamp:    2026-01-14T13:43:01Z
  Completion Timestamp:  2026-01-14T13:43:49Z
  Node:                  kubernetes-cluster-dkpp-kubernetes-cluster-dkpp-np-pk3w-smkvs46
  Node OS:               linux
  Path:                  /f4da9b85-db30-4df7-9ebe-f61148c71c7c
  Phase:                 Completed
  Progress:
  Snapshot ID:           4ff960c3f4d2f55253926f36a66d5e50
  Start Timestamp:       2026-01-14T13:43:20Z
Events:
  Type    Reason               Age                  From                   Message
  ----    ------               ----                 ----                   -------
  Normal  Data-Path-Started    4m2s                 cormac-backup-1-mswhk  Data path for cormac-backup-1-mswhk started
  Normal  Data-Path-Progress   4m2s (x2 over 4m2s)  cormac-backup-1-mswhk  {}
  Normal  Data-Path-Completed  4m1s                 cormac-backup-1-mswhk  {"snapshotID":"4ff960c3f4d2f55253926f36a66d5e50","emptySnapshot":false,"source":{"byPath":"/f4da9b85-db30-4df7-9ebe-f61148c71c7c","volumeMode":"Filesystem"}}
  Normal  Data-Path-Stopped    4m1s                 cormac-backup-1-mswhk  Data path for cormac-backup-1-mswhk stopped
This looks to have been successful. The final step of this post is to ensure we can do a successful restore of the application that we have just backed up. Let’s now do that.
Restore using Velero CLI
Start by deleting the “cormac-ns” namespace that was just backed up. This also deletes the Pod, the PVC and the resulting PV.
$ kubectl delete ns cormac-ns
namespace "cormac-ns" deleted

$ kubectl get pv
No resources found
Begin the velero restore, using the --from-backup option to identify which backup to restore from. Again, we can use the describe command to check the status of the restore.
$ velero restore create cormac-restore-260114 --from-backup cormac-backup-1
Restore request "cormac-restore-260114" submitted successfully.
Run `velero restore describe cormac-restore-260114` or `velero restore logs cormac-restore-260114` for more details.

$ velero restore describe cormac-restore-260114
Name:         cormac-restore-260114
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Phase:                       Completed
Total items to be restored:  56
Items restored:              56

Started:    2026-01-14 13:57:37 +0000 UTC
Completed:  2026-01-14 13:58:06 +0000 UTC

Warnings:
  Velero:     <none>
  Cluster:    <none>
  Namespaces:
    cormac-ns:  could not restore, ConfigMap:kube-root-ca.crt already exists. Warning: the in-cluster version is different than the backed-up version

Backup:  cormac-backup-1

Namespaces:
  Included:  all namespaces found in the backup
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io, csinodes.storage.k8s.io, volumeattachments.storage.k8s.io, backuprepositories.velero.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Or label selector:  <none>

Restore PVs:  auto

CSI Snapshot Restores:
  cormac-ns/example-cormac-block-pvc:
    Data Movement: specify --details for more information

Existing Resource Policy:   <none>
ItemOperationTimeout:       4h0m0s

Preserve Service NodePorts:  auto

Uploader config:

Restore Item Operations:  1 of 1 completed successfully, 0 failed (specify --details for more information)
HooksAttempted:   0
HooksFailed:      0
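As with backups, a ‘velero restore get’ gives a one-line summary of each restore and its phase if you just want a quick status check:
$ velero restore get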
And just like we saw with the backup, the Data Movement details can be observed if you use the --details option with this command. This will include the following additional details in the output, as well as the list of resources that were restored:
Restore PVs:  auto

CSI Snapshot Restores:
  cormac-ns/example-cormac-block-pvc:
    Data Movement:
      Operation ID: dd-90ca852c-7601-4c70-bd8e-8df9e07c4a31.85cf01bc-52fb-4bcba3ea6
      Data Mover: velero
      Uploader Type: kopia

Existing Resource Policy:   <none>
ItemOperationTimeout:       4h0m0s

Preserve Service NodePorts:  auto

Uploader config:

Restore Item Operations:
  Operation for persistentvolumeclaims cormac-ns/example-cormac-block-pvc:
    Restore Item Action Plugin:  velero.io/csi-pvc-restorer
    Operation ID:                dd-90ca852c-7601-4c70-bd8e-8df9e07c4a31.85cf01bc-52fb-4bcba3ea6
    Phase:                       Completed
    Progress description:        Completed
    Created:                     2026-01-14 13:57:38 +0000 UTC
    Started:                     2026-01-14 13:57:46 +0000 UTC
    Updated:                     2026-01-14 13:58:01 +0000 UTC
The restore appears to have been successful. But let’s verify using some kubectl commands.
$ kubectl get ns cormac-ns
NAME        STATUS   AGE
cormac-ns   Active   2m22s

$ kubectl get pod,pvc -n cormac-ns
NAME               READY   STATUS    RESTARTS   AGE
pod/cormac-pod-1   1/1     Running   0          2m32s

NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                                          VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/example-cormac-block-pvc   Bound    pvc-cf66b8fc-adc1-43e5-8a81-1fbac9e21e9a   5Gi        RWO            sfo-w01-cl01-optimal-datastore-default-policy-raid1   <unset>                 2m32s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS                                          VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-cf66b8fc-adc1-43e5-8a81-1fbac9e21e9a   5Gi        RWO            Delete           Bound    cormac-ns/example-cormac-block-pvc   sfo-w01-cl01-optimal-datastore-default-policy-raid1   <unset>                          2m37s
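If you wrote some test data into /demo before taking the backup, you can also check that it came back with the restore (this assumes the illustrative file name used earlier):
$ kubectl exec -n cormac-ns cormac-pod-1 -- cat /demo/testfile.txt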
Success. It would appear as if everything has restored correctly. If there are issues with a restore, there are some useful CRs available once again.
Restore: Under the covers
Just like we saw with the backup, the following CRs, specifically downloadrequests and datadownloads, can be useful to query if there are issues with a Velero restore operation on a VKS cluster using CSI snapshots.
$ kubectl get downloadrequests -n velero
NAME                                                         AGE
cormac-restore-260114-01fc60e9-d8c6-4efc-81f2-06f3e545272d   4m10s
cormac-restore-260114-05bd4b4e-5ad6-4e52-a261-f4b89fb0bae3   4m10s
cormac-restore-260114-2eba4517-ca71-4152-bb2a-fda141de8ed7   5m48s
cormac-restore-260114-7bbf1ae8-5e65-4a5b-86e7-54334b3ddd0a   4m45s
cormac-restore-260114-b926fbd2-8234-45c9-af1e-cd229d937504   4m10s
cormac-restore-260114-debbab74-3e50-449e-95b1-ccbe4415f8c9   5m48s
cormac-restore-260114-e133ec6f-6c8e-4951-8611-ffce0ee8ba8f   4m45s
cormac-restore-260114-fd6ed61e-d99e-43e1-a3db-4e2c8644242f   4m11s

$ kubectl describe downloadrequests cormac-restore-260114-01fc60e9-d8c6-4efc-81f2-06f3e545272d -n velero
Name:         cormac-restore-260114-01fc60e9-d8c6-4efc-81f2-06f3e545272d
Namespace:    velero
Labels:       <none>
Annotations:  <none>
API Version:  velero.io/v1
Kind:         DownloadRequest
Metadata:
  Creation Timestamp:  2026-01-14T13:59:30Z
  Generation:          2
  Resource Version:    15170
  UID:                 db0c147b-7b67-4e0d-92f7-6ff12105b992
Spec:
  Target:
    Kind:  RestoreVolumeInfo
    Name:  cormac-restore-260114
Status:
  Download URL:  https://minio.rainpole.io:9000/velero-backups/restores/cormac-restore-260114/cormac-restore-260114-volumeinfo.json.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=admin%2F20260114%2Fminio%2Fs3%2Faws4_request&X-Amz-Date=20260114T135930Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=ddba7b633a6362e9ca648c301b7c5bb22fbf774230836c9ebe146f6c63a56ff5
  Expiration:    2026-01-14T14:09:30Z
  Phase:         Processed
Events:          <none>

$ kubectl get datadownloads -n velero
NAME                          STATUS      STARTED   BYTES DONE   TOTAL BYTES   STORAGE LOCATION   AGE     NODE
cormac-restore-260114-v65f8   Completed   6m18s                                default            6m26s   kubernetes-cluster-dkpp-kubernetes-cluster-dkpp-np-pk3w-smkvs46

$ kubectl describe datadownloads cormac-restore-260114-v65f8 -n velero
Name:         cormac-restore-260114-v65f8
Namespace:    velero
Labels:       velero.io/async-operation-id=dd-90ca852c-7601-4c70-bd8e-8df9e07c4a31.85cf01bc-52fb-4bcba3ea6
              velero.io/restore-name=cormac-restore-260114
              velero.io/restore-uid=90ca852c-7601-4c70-bd8e-8df9e07c4a31
Annotations:  <none>
API Version:  velero.io/v2alpha1
Kind:         DataDownload
Metadata:
  Creation Timestamp:  2026-01-14T13:57:38Z
  Generate Name:       cormac-restore-260114-
  Generation:          5
  Owner References:
    API Version:     velero.io/v1
    Controller:      true
    Kind:            Restore
    Name:            cormac-restore-260114
    UID:             90ca852c-7601-4c70-bd8e-8df9e07c4a31
  Resource Version:  14953
  UID:               1c8f7949-b20e-4878-96b8-46a7859423cf
Spec:
  Backup Storage Location:  default
  Data Mover Config:
    Write Sparse Files:  false
  Node OS:               linux
  Operation Timeout:     10m0s
  Snapshot ID:           4ff960c3f4d2f55253926f36a66d5e50
  Source Namespace:      cormac-ns
  Target Volume:
    Namespace:  cormac-ns
    Pv:
    Pvc:        example-cormac-block-pvc
Status:
  Accepted By Node:      kubernetes-cluster-dkpp-kubernetes-cluster-dkpp-np-pk3w-smkvs46
  Accepted Timestamp:    2026-01-14T13:57:38Z
  Completion Timestamp:  2026-01-14T13:58:01Z
  Node:                  kubernetes-cluster-dkpp-kubernetes-cluster-dkpp-np-pk3w-smkvs46
  Phase:                 Completed
  Progress:
  Start Timestamp:       2026-01-14T13:57:46Z
Events:
  Type    Reason               Age              From                         Message
  ----    ------               ----             ----                         -------
  Normal  Data-Path-Started    7m               cormac-restore-260114-v65f8  Data path for cormac-restore-260114-v65f8 started
  Normal  Data-Path-Progress   7m (x2 over 7m)  cormac-restore-260114-v65f8  {}
  Normal  Data-Path-Completed  7m               cormac-restore-260114-v65f8  {"target":{"byPath":"/1c8f7949-b20e-4878-96b8-46a7859423cf","volumeMode":"Filesystem"}}
  Normal  Data-Path-Stopped    7m               cormac-restore-260114-v65f8  Data path for cormac-restore-260114-v65f8 stopped
And that completes the post. Hopefully this has demonstrated that Velero continues to be a very powerful command line tool for backing up and restoring Kubernetes workloads, including those running in vSphere Kubernetes Service (VKS) clusters. And if you have VCF Automation, remember that you do not need to take this manual approach; instead you can use the VKS Manager (VKSM) Data Protection feature via the UI.