The steps could be outlined as follows:
- Download Consumption Operator helm chart
- Create a Namespace on your local Kubernetes cluster for the Operator
- Create a secret to pull consumption operator images from a registry
- Create a secret to access Data Services Manager
- Build a values override file with your specific environment details
- Install the Consumption Operator
- Create a ‘dev-team’ namespace with access to a specific infrastructure policy and backup location
- Create a database using this ‘dev-team’ namespace
Step 1: Download the Consumption Operator helm chart
The Consumption Operator ships as a helm chart, so you will need helm installed to deploy it. The command to download the operator is shown below. Note that this is for DSM v2.0, where the operator is v1.0.0. The operator version is subject to change in later DSM releases, so always use the latest images with the latest version of DSM. You can navigate to “projects.registry.vmware.com” and check out the DSM artefact versions there.
% helm pull oci://projects.registry.vmware.com/dsm-consumption-operator/dsm-consumption-operator --version 1.0.0 -d consumption/ --untar
Pulled: projects.registry.vmware.com/dsm-consumption-operator/dsm-consumption-operator:1.0.0
Digest: sha256:0692ea7d59b4207baee8d772068241f9deb213213705d0206b2079a12aa5ba15

% ls consumption/dsm-consumption-operator
Chart.yaml  crds  open_source_license_vmware-data-services-manager-consumption-operator_1.0.0_ga.txt  templates  values.yaml
Step 2: Create Namespace for Consumption Operator
Ensure that the kubeconfig context is set to your local K8s cluster (the one doing the consuming). Then create a namespace on your local K8s cluster for the Consumption Operator.
% kubectl create namespace dsm-consumption-operator-system
namespace/dsm-consumption-operator-system created
Step 3: Create Registry Secret
No authentication is required to access the default VMware Harbor Registry. If you are air-gapped and have pulled the necessary images down to your own local registry, modify the secret accordingly.
% kubectl -n dsm-consumption-operator-system create secret docker-registry registry-creds \
  --docker-server=https://projects.registry.vmware.com \
  --docker-username=ignore \
  --docker-password=ignore
secret/registry-creds created
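Under the hood, `kubectl create secret docker-registry` simply stores a base64-encoded `.dockerconfigjson` payload. If you are scripting the air-gapped scenario against your own registry, it can be useful to know what that payload looks like. A minimal sketch, using the same registry URL and throwaway credentials as the command above:

```shell
# Build the .dockerconfigjson payload that kubectl generates for a
# docker-registry secret (same registry/credentials as above)
REGISTRY="https://projects.registry.vmware.com"
USERNAME="ignore"
PASSWORD="ignore"

# The "auth" field is base64 of "username:password"
AUTH=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64)

printf '{"auths":{"%s":{"username":"%s","password":"%s","auth":"%s"}}}\n' \
  "$REGISTRY" "$USERNAME" "$PASSWORD" "$AUTH"
```

For a private registry, substitute your own server, username, and password in the `kubectl create secret docker-registry` command; the generated secret has exactly this structure.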
Step 4: Create DSM Secret
This secret holds the credentials and endpoint that the Consumption Operator uses to communicate with Data Services Manager, along with the DSM root CA certificate (read here from a local file named root-ca).
% kubectl -n dsm-consumption-operator-system create secret generic dsm-auth-creds \
  --from-file=root_ca=root-ca \
  --from-literal=dsm_user=provider@broadcom.com \
  --from-literal=dsm_password=VMware123! \
  --from-literal=dsm_endpoint=https://xx.xx.xx.xx
secret/dsm-auth-creds created
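If you prefer to manage this declaratively (e.g. in git, with the password injected by a secrets tool), the same secret can be expressed as a manifest. A sketch, mirroring the command above — the root_ca value would be the PEM contents of your DSM root CA file:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dsm-auth-creds
  namespace: dsm-consumption-operator-system
type: Opaque
stringData:
  dsm_user: provider@broadcom.com
  dsm_password: VMware123!
  dsm_endpoint: https://xx.xx.xx.xx
  # root_ca: <paste the PEM contents of the DSM root CA certificate here>
```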
Step 5: Create a values_override.yaml file
This file serves a number of purposes. It tells the Consumption Operator:
- The secret which stores credentials to access Data Services Manager (DSM)
- Which infrastructure policy (or policies) should be used by DSM for provisioning the database
- Which backup location(s) should be used by DSM for backing up the database
- The name of the local K8s cluster that is using the Consumption Operator
- Any special privileges required on the local K8s cluster to provision the Consumption Operator
If you are new to Data Services Manager, it might be worth reading this introductory blog post which explains concepts such as infrastructure policies.
Make sure that dsm.authSecretName matches the secret name that you created in step 4. Here is an example of what this file might look like from my environment. I am only allowing one infrastructure policy and one backup location. You will probably need to make some changes in your override file to reflect your DSM environment and local K8s cluster name.
imagePullSecret: registry-creds
replicas: 1
image:
  name: projects.registry.vmware.com/dsm-consumption-operator/consumption-operator
  tag: 1.0.0
dsm:
  authSecretName: dsm-auth-creds
  # allowedInfrastructurePolicies is a mandatory field that needs to be filled
  # with allowed infrastructure policies for the given consumption cluster
  allowedInfrastructurePolicies:
  - global-infra-policy
  # allowedBackupLocations is a mandatory field that holds a list of backup locations
  # that can be used by database clusters created in this consumption cluster
  allowedBackupLocations:
  - dsm2-db-backup
  # consumptionClusterName is an optional name that you can provide to identify
  # the Kubernetes cluster where the operator is deployed
  consumptionClusterName: "tkg-v1-23-8-01"
# if there is a PSP setting on your k8s cluster, set the below value to true
# and attach a psp role to it
psp:
  required: false
Step 6: Install the Consumption Operator
Everything is now in place to install the Consumption Operator. Again, we use helm to do this step. Note that the values_override.yaml built in step 5 is referenced during the installation.
% helm install dsm-consumption-operator consumption/dsm-consumption-operator -f values_override.yaml --namespace dsm-consumption-operator-system
NAME: dsm-consumption-operator
LAST DEPLOYED: Wed Feb 21 12:35:48 2024
NAMESPACE: dsm-consumption-operator-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing dsm-consumption-operator.

Your release is named dsm-consumption-operator.

To learn more about the release, try:

  $ helm status dsm-consumption-operator
  $ helm get all dsm-consumption-operator

To find out the deployed custom resource definitions, try:

  $ kubectl get crds |grep dataservices.vmware.com
After the operator is installed, a few additional commands can be used to check the status. Ensure that the controller manager pod is ready and running.
% helm status dsm-consumption-operator --namespace dsm-consumption-operator-system
NAME: dsm-consumption-operator
LAST DEPLOYED: Wed Feb 21 12:35:48 2024
NAMESPACE: dsm-consumption-operator-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing dsm-consumption-operator.

Your release is named dsm-consumption-operator.

To learn more about the release, try:

  $ helm status dsm-consumption-operator
  $ helm get all dsm-consumption-operator

To find out the deployed custom resource definitions, try:

  $ kubectl get crds |grep dataservices.vmware.com

% kubectl get all -n dsm-consumption-operator-system
NAME                                                                  READY   STATUS      RESTARTS   AGE
pod/dsm-consumption-operator-controller-manager-79c849579c-nxggk      1/1     Running     0          4m16s
pod/dsm-consumption-operator-mutating-webhook-configuration-pa8xpvp   0/1     Completed   1          4m16s
pod/dsm-consumption-operator-validating-webhook-configuration-j29pg   0/1     Completed   1          4m16s
pod/dsm-consumption-operator-webhook-server-cert-create-spld9         0/1     Completed   0          4m16s

NAME                                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/dsm-consumption-operator-webhook-service   ClusterIP   10.109.216.190   <none>        443/TCP   4m17s

NAME                                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dsm-consumption-operator-controller-manager   1/1     1            1           4m16s

NAME                                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/dsm-consumption-operator-controller-manager-79c849579c   1         1         1       4m16s

NAME                                                                        COMPLETIONS   DURATION   AGE
job.batch/dsm-consumption-operator-mutating-webhook-configuration-patch     1/1           5s         4m17s
job.batch/dsm-consumption-operator-validating-webhook-configuration-patch   1/1           5s         4m17s
job.batch/dsm-consumption-operator-webhook-server-cert-create               1/1           3s         4m17s
The Consumption Operator is now successfully installed. The next step is to create a ‘tenant’ on your local Kubernetes cluster, the consumption cluster. This tenant will have the ability to provision databases via DSM.
If you do not observe any running pods, it could be due to the security context, especially on TKG. You may need to set psp.required to true in the values_override.yaml file, and select a role with appropriate Pod Security Policy privileges, e.g.:
psp:
  required: true
  role: "psp:vmware-system-restricted"
Be aware that PodSecurityPolicy was deprecated in K8s v1.21 and has been removed from K8s versions 1.25 and later. See here for further details.
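On clusters running v1.25 or later, Pod Security Admission is the built-in replacement for PSP, and the equivalent control is a label on the namespace rather than a role binding. A sketch, assuming the operator needs the `privileged` level — in practice, choose the least-permissive level that allows the operator pods to start:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dsm-consumption-operator-system
  labels:
    # Pod Security Admission enforcement level for this namespace
    # (assumption: privileged; use a stricter level if the pods allow it)
    pod-security.kubernetes.io/enforce: privileged
```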
Step 7: Create a User Namespace with DSM Bindings
The concept of a ‘tenant’ is achieved by building a user Namespace on the K8s cluster. This ‘tenant’ will then be able to request that DSM create databases using one of the allowed infrastructure policies defined in the override file, and send backups to one of the backup locations that were also defined there. It is possible to create multiple namespaces with different infrastructure policies and backup locations in the same local/consumption K8s cluster. Different ‘tenants’ can then send requests to the same Data Services Manager to deploy databases on their behalf.
Here is my Namespace/Bindings file, which creates a “dev-team” namespace on my local K8s cluster. This dev team can build databases using the infrastructure policy and backup location described in the bindings.
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team
---
apiVersion: infrastructure.dataservices.vmware.com/v1alpha1
kind: InfrastructurePolicyBinding
metadata:
  name: global-infra-policy
  namespace: dev-team
---
apiVersion: databases.dataservices.vmware.com/v1alpha1
kind: BackupLocationBinding
metadata:
  name: dsm2-db-backup
  namespace: dev-team

% kubectl apply -f dev-team-ns.yaml
namespace/dev-team created
infrastructurepolicybinding.infrastructure.dataservices.vmware.com/global-infra-policy created
backuplocationbinding.databases.dataservices.vmware.com/dsm2-db-backup created

% kubectl get backuplocationbindings -A
NAMESPACE   NAME             STATUS
dev-team    dsm2-db-backup   Ready

% kubectl get infrastructurepolicybindings -A
NAMESPACE   NAME                  STATUS
dev-team    global-infra-policy   Ready
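To illustrate the multi-tenant point, a second namespace can carry its own bindings. A sketch — the ‘qa-team’ namespace and the policy/location names here are hypothetical, and any names you bind must appear in the allowedInfrastructurePolicies and allowedBackupLocations lists in your values_override.yaml:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: qa-team
---
apiVersion: infrastructure.dataservices.vmware.com/v1alpha1
kind: InfrastructurePolicyBinding
metadata:
  # must match an allowed infrastructure policy (hypothetical name)
  name: qa-infra-policy
  namespace: qa-team
---
apiVersion: databases.dataservices.vmware.com/v1alpha1
kind: BackupLocationBinding
metadata:
  # must match an allowed backup location (hypothetical name)
  name: qa-db-backup
  namespace: qa-team
```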
Everything is now in place to build a database.
Step 8: Create a PostgreSQL Database
This is the manifest that I use to create the database. It is very similar to some of the other manifests seen in my earlier post on the Gateway API available in DSM. Here is a link to my GitHub repo with some other database manifests that you might want to try.
apiVersion: databases.dataservices.vmware.com/v1alpha1
kind: PostgresCluster
metadata:
  name: pg-dev-k8s-01
  namespace: dev-team
spec:
  replicas: 1
  version: "14"
  vmClass:
    name: medium
  storageSpace: 60Gi
  infrastructurePolicy:
    name: global-infra-policy
  storagePolicyName: "vSAN Default Storage Policy"
  backupConfig:
    backupRetentionDays: 30
    schedules:
    - name: full-weekly
      type: full
      schedule: "0 0 * * 0"
    - name: incremental-daily
      type: incremental
      schedule: "0 0 * * *"
  backupLocation:
    name: dsm2-db-backup
Assuming no issues, the database should provision successfully.
% kubectl get postgresCluster -A
NAMESPACE   NAME            STATUS   STORAGE   VERSION   AGE
dev-team    pg-dev-k8s-01   Ready    60Gi      14        27m
And if we check on the DSM UI:
Success! We have deployed a database from a remote K8s cluster via DSM through the DSM Consumption Operator. Now, the final part you might be wondering about is how one connects to the database. Do I need to access the DSM UI? No, that’s not necessary. Rest assured that all the necessary connection information is available via Kubernetes. I can retrieve the database name, IP address, username, and password. I have intentionally obfuscated part of the IP address below.
% kubectl get postgresclusters pg-dev-k8s-01 -n dev-team --template={{.status.connection.host}}
xx.xx.51.178%

% kubectl get postgresclusters pg-dev-k8s-01 -n dev-team --template={{.status.connection.dbname}}
pg-dev-k8s-01%

% kubectl get secrets pg-dev-k8s-01 -n dev-team --template={{.data.password}} | base64 --decode
tBEz7D25HXD77DFl7mglZH388VS5dR%

% kubectl get postgresclusters pg-dev-k8s-01 -n dev-team --template={{.status.connection.username}}
pgadmin%
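Putting those four values together, you can assemble a standard PostgreSQL connection URI without ever opening the DSM UI. A sketch using the sample (obfuscated) values from the output above — in practice each variable would be populated by the corresponding kubectl query, e.g. `DB_HOST=$(kubectl get postgresclusters pg-dev-k8s-01 -n dev-team --template={{.status.connection.host}})`:

```shell
# Sample values from the kubectl queries above (host is obfuscated)
DB_HOST="xx.xx.51.178"
DB_NAME="pg-dev-k8s-01"
DB_USER="pgadmin"
DB_PASS="tBEz7D25HXD77DFl7mglZH388VS5dR"

# Standard PostgreSQL connection URI; pass it straight to psql or
# most client libraries, e.g.: psql "$CONN_URI"
CONN_URI="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/${DB_NAME}"
echo "$CONN_URI"
```

Note that 5432 is the default PostgreSQL port; if your deployment exposes a different port, it should also be available under `.status.connection` on the PostgresCluster resource.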
Thanks for reading this far. I hope this post has demonstrated the power and usefulness of the DSM Consumption Operator for provisioning databases remotely from Kubernetes clusters. The scripts and files used in this blog post can be found on this GitHub repo.