Getting Started with Data Services Manager 2.0 – Part 10: Consumption Operator

One of the common asks we get from customers on Data Services Manager (DSM) 2.0 is the following: “I already run Kubernetes. Can I create databases from my existing Kubernetes clusters using DSM?” The answer is yes. We provide a piece of software called the DSM Consumption Operator. This installs on your local Kubernetes (K8s) cluster and allows admins or developers to request the creation of databases (PostgreSQL, MySQL). On receipt of such a request, DSM provisions its own K8s cluster, and then provisions the database on top. Your admins or developers can then connect to the database and use it as they see fit. In this post, I will walk through the deployment of the DSM Consumption Operator onto a TKG workload cluster running K8s v1.23.8, deployed via vSphere with Tanzu. I will then use the Consumption Operator to request that DSM provision a new PostgreSQL database. This blog post should be used alongside the official DSM 2.0 documentation.

The steps could be outlined as follows:

  1. Download Consumption Operator helm chart
  2. Create a Namespace on your local Kubernetes cluster for the Operator
  3. Create a secret to pull consumption operator images from a registry
  4. Create a secret to access Data Services Manager
  5. Build a values override file with your specific environment details
  6. Install the Consumption Operator
  7. Create a ‘dev’ namespace with access to specific infrastructure policy and backup location
  8. Create a database using this ‘dev’ namespace

Step 1: Download the Consumption Operator helm chart

The Consumption Operator comes as a helm chart. You will need to install helm to deploy it. The command to download the operator is shown here:

% helm pull oci:// --version 1.0.0 -d consumption/ --untar
Digest: sha256:0692ea7d59b4207baee8d772068241f9deb213213705d0206b2079a12aa5ba15

% ls consumption/dsm-consumption-operator

Step 2: Create Namespace for Consumption Operator

Ensure that the kubeconfig context is set to your local K8s cluster (the one doing the consuming). Then create a namespace on your local K8s cluster for the Consumption Operator.

% kubectl create namespace dsm-consumption-operator-system
namespace/dsm-consumption-operator-system created

Step 3: Create Registry Secret

No authentication is required to access the default VMware Harbor Registry. If you are air-gapped and have pulled the necessary images down to your own local registry, modify the secret accordingly.

% kubectl -n dsm-consumption-operator-system create secret docker-registry registry-creds \
  --docker-server= \
  --docker-username=ignore \
  --docker-password=ignore
secret/registry-creds created

Step 4: Create DSM Secret

This secret contains details on how to access the Gateway API on the DSM environment. Note that TLS is required, so the certificate will need to be obtained from the DSM provider; the official documentation details how to do this. In the command below, I have obfuscated the IP address of my DSM appliance, but make sure that you include the https:// prefix. Note the name of the secret that you use, as you will need to add it to the values override file later. Here, I have called it dsm-auth-creds.

% kubectl -n dsm-consumption-operator-system create secret generic dsm-auth-creds \
 --from-file=root_ca=root-ca \
 --from-literal=dsm_password=VMware123!
secret/dsm-auth-creds created

Step 5: Create a values_override.yaml file

This file serves a number of purposes. It tells the Consumption Operator:

  1. The secret which stores credentials to access Data Services Manager (DSM)
  2. Which infrastructure policy (or policies) should be used by DSM for provisioning the database
  3. Which backup location(s) should be used by DSM for backing up the database
  4. The name of the local K8s cluster that is using the Consumption Operator
  5. Any special privileges required on the local K8s cluster to provision the Consumption Operator

If you are new to Data Services Manager, it might be worth reading this introductory blog post which explains concepts such as infrastructure policies.

Make sure the name of the dsm.authSecretName matches the secret name that you created in step 4. Here is an example of what this file might look like in my environment. I am only allowing one infrastructure policy and one backup location. You will probably need to make some changes in your override file to reflect your DSM environment and local K8s cluster name.

imagePullSecret: registry-creds
replicas: 1
image:
  tag: 1.0.0
dsm:
  authSecretName: dsm-auth-creds

  # allowedInfrastructurePolicies is a mandatory field that needs to be filled with allowed infrastructure policies for the given consumption cluster
  allowedInfrastructurePolicies:
  - global-infra-policy

  # allowedBackupLocations is a mandatory field that holds a list of backup locations that can be used by database clusters created in this consumption cluster
  allowedBackupLocations:
  - dsm2-db-backup

# consumptionClusterName is an optional name that you can provide to identify the Kubernetes cluster where the operator is deployed
consumptionClusterName: "tkg-v1-23-8-01"

# if there is a PSP setting on your k8s cluster, set the below value to true and attach a psp role to it
psp:
  required: false

Step 6: Install the Consumption Operator

Everything is now in place to install the Consumption Operator. Again, we use helm to do this step. Note that the values_override.yaml built in step 5 is referenced during the installation.

% helm install dsm-consumption-operator consumption/dsm-consumption-operator -f values_override.yaml --namespace dsm-consumption-operator-system
NAME: dsm-consumption-operator
LAST DEPLOYED: Wed Feb 21 12:35:48 2024
NAMESPACE: dsm-consumption-operator-system
STATUS: deployed
Thank you for installing dsm-consumption-operator.

Your release is named dsm-consumption-operator.
To learn more about the release, try:
  $ helm status dsm-consumption-operator
  $ helm get all dsm-consumption-operator
To find out the deployed custom resource definitions, try:
  $ kubectl get crds |grep

After the operator is installed, a few additional commands can be used to check its status. Ensure that the controller manager pod is ready and running.

% helm status dsm-consumption-operator --namespace dsm-consumption-operator-system
NAME: dsm-consumption-operator
LAST DEPLOYED: Wed Feb 21 12:35:48 2024
NAMESPACE: dsm-consumption-operator-system
STATUS: deployed
Thank you for installing dsm-consumption-operator.
Your release is named dsm-consumption-operator.

To learn more about the release, try:
  $ helm status dsm-consumption-operator
  $ helm get all dsm-consumption-operator

To find out the deployed custom resource definitions, try:
  $ kubectl get crds |grep

% kubectl get all -n dsm-consumption-operator-system
NAME                                                                  READY   STATUS      RESTARTS   AGE
pod/dsm-consumption-operator-controller-manager-79c849579c-nxggk      1/1     Running     0          4m16s
pod/dsm-consumption-operator-mutating-webhook-configuration-pa8xpvp   0/1     Completed   1          4m16s
pod/dsm-consumption-operator-validating-webhook-configuration-j29pg   0/1     Completed   1          4m16s
pod/dsm-consumption-operator-webhook-server-cert-create-spld9         0/1     Completed   0          4m16s
NAME                                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/dsm-consumption-operator-webhook-service   ClusterIP   <none>        443/TCP   4m17s
NAME                                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dsm-consumption-operator-controller-manager   1/1     1            1           4m16s
NAME                                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/dsm-consumption-operator-controller-manager-79c849579c   1         1         1       4m16s
NAME                                                                        COMPLETIONS   DURATION   AGE
job.batch/dsm-consumption-operator-mutating-webhook-configuration-patch     1/1           5s         4m17s
job.batch/dsm-consumption-operator-validating-webhook-configuration-patch   1/1           5s         4m17s
job.batch/dsm-consumption-operator-webhook-server-cert-create               1/1           3s         4m17s

The Consumption Operator is now successfully installed. The next step is to create a ‘tenant’ on your local Kubernetes cluster, the consumption cluster. This tenant will have the ability to provision databases via DSM.

If you do not observe any running pods, it could be due to the security context, especially on TKG. You may need to set psp.required to true in the values_override.yaml file, and attach a role with appropriate Pod Security Policy privileges, e.g.:

psp:
  required: true
  role: "psp:vmware-system-restricted"

Be aware that PodSecurityPolicy was deprecated in K8s v1.21, and has been removed from K8s versions 1.25 and later. See the Kubernetes documentation for further details.
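On K8s versions where PSP is gone, its replacement is Pod Security Admission, which is configured with namespace labels rather than a role binding. A sketch using the standard PSA labels; the policy level the operator's pods actually need is an assumption here, so check their securityContext settings before choosing one:

```yaml
# Pod Security Admission (the PSP replacement, enforced from K8s v1.25):
# the policy level is set via labels on the namespace itself.
apiVersion: v1
kind: Namespace
metadata:
  name: dsm-consumption-operator-system
  labels:
    # Valid levels are "privileged", "baseline", and "restricted".
    # "baseline" is an assumption for illustration, not a documented requirement.
    pod-security.kubernetes.io/enforce: baseline
```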

Step 7: Create a User Namespace with DSM Bindings

The concept of a ‘tenant’ is achieved by building a user Namespace on the K8s cluster. This ‘tenant’ is then able to ask DSM to create databases using one of the allowed infrastructure policies defined in the override file, and to send backups to an appropriate backup location, also defined in the override file. It is possible to create multiple namespaces with different infrastructure policies and backup locations in the same local/consumption K8s cluster. Different ‘tenants’ can send requests to the same Data Services Manager to deploy databases on their behalf.

Here is my Namespace/Bindings file, which creates a “dev-team” namespace on my local K8s cluster. This dev team can build databases using the infrastructure policy and backup location described in the bindings.

apiVersion: v1
kind: Namespace
metadata:
  name: dev-team
---
apiVersion: infrastructure.dataservices.vmware.com/v1alpha1
kind: InfrastructurePolicyBinding
metadata:
  name: global-infra-policy
  namespace: dev-team
---
apiVersion: infrastructure.dataservices.vmware.com/v1alpha1
kind: BackupLocationBinding
metadata:
  name: dsm2-db-backup
  namespace: dev-team

% kubectl apply -f dev-team-ns.yaml
namespace/dev-team created
infrastructurepolicybinding.infrastructure.dataservices.vmware.com/global-infra-policy created
backuplocationbinding.infrastructure.dataservices.vmware.com/dsm2-db-backup created

% kubectl get backuplocationbindings -A
NAMESPACE   NAME             STATUS
dev-team    dsm2-db-backup   Ready

% kubectl get infrastructurepolicybindings -A
NAMESPACE   NAME                  STATUS
dev-team    global-infra-policy   Ready
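To illustrate the multi-tenant point made earlier, a second namespace could be bound to its own policy and backup location in the same consumption cluster. The qa-team namespace and the policy/location names below are hypothetical, and any policies referenced must also appear in the operator's values_override.yaml:

```yaml
# Hypothetical second tenant: a "qa-team" namespace with its own bindings.
# Names are illustrative only; substitute policies from your DSM environment.
apiVersion: v1
kind: Namespace
metadata:
  name: qa-team
---
apiVersion: infrastructure.dataservices.vmware.com/v1alpha1
kind: InfrastructurePolicyBinding
metadata:
  name: qa-infra-policy
  namespace: qa-team
---
apiVersion: infrastructure.dataservices.vmware.com/v1alpha1
kind: BackupLocationBinding
metadata:
  name: qa-db-backup
  namespace: qa-team
```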

Everything is now in place to build a database.

Step 8: Create a PostgreSQL Database

This is the manifest that I use to create the database. It is very similar to some of the other manifests we have seen in my earlier post on the Gateway API available in DSM. Here is a link to my GitHub Repo with some other database manifests that you might want to try.

apiVersion: databases.dataservices.vmware.com/v1alpha1
kind: PostgresCluster
metadata:
  name: pg-dev-k8s-01
  namespace: dev-team
spec:
  replicas: 1
  version: "14"
  vmClass:
    name: medium
  storageSpace: 60Gi
  infrastructurePolicy:
    name: global-infra-policy
  storagePolicyName: "vSAN Default Storage Policy"
  backupConfig:
    backupRetentionDays: 30
    schedules:
      - name: full-weekly
        type: full
        schedule: "0 0 * * 0"
      - name: incremental-daily
        type: incremental
        schedule: "0 0 * * *"
  backupLocation:
    name: dsm2-db-backup

Assuming no issues, the database should provision successfully.

% kubectl get postgresCluster -A
NAMESPACE   NAME            STATUS   STORAGE   VERSION   AGE
dev-team    pg-dev-k8s-01   Ready    60Gi      14        27m
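DSM can provision MySQL databases through the Consumption Operator in the same way. For illustration, here is a hypothetical MySQLCluster manifest; it assumes the MySQL CRD mirrors the PostgresCluster spec fields above, which you should verify against the official DSM documentation:

```yaml
# Hypothetical MySQL equivalent of the PostgresCluster manifest above.
# Kind, API group, and version string are assumptions for illustration.
apiVersion: databases.dataservices.vmware.com/v1alpha1
kind: MySQLCluster
metadata:
  name: mysql-dev-k8s-01
  namespace: dev-team
spec:
  replicas: 1
  version: "8.0"
  vmClass:
    name: medium
  storageSpace: 60Gi
  infrastructurePolicy:
    name: global-infra-policy
  storagePolicyName: "vSAN Default Storage Policy"
  backupLocation:
    name: dsm2-db-backup
```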

And if we check on the DSM UI, the new database is visible there as well:

Success! We have deployed a database from a remote K8s cluster via DSM through the DSM Consumption Operator. Now, the final thing you might be wondering is how one connects to the database. Do I need to access the DSM UI? No, that’s not necessary. Rest assured that all the necessary connection information is available via Kubernetes. I can retrieve the database name, IP address, username, and password. I’ve intentionally obfuscated some of the output below.

% kubectl get postgresclusters pg-dev-k8s-01 -n dev-team --template={{}}

% kubectl get postgresclusters pg-dev-k8s-01 -n dev-team --template={{.status.connection.dbname}}

% kubectl get secrets pg-dev-k8s-01 -n dev-team --template={{.data.password}} | base64 --decode

% kubectl get postgresclusters pg-dev-k8s-01 -n dev-team --template={{.status.connection.username}}
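Note that the secret value comes back base64-encoded, which is why the password command above pipes through base64 --decode. Here is a small sketch of how the retrieved pieces combine into a psql invocation; the host, dbname, and username values are hypothetical stand-ins for the kubectl output above:

```shell
# Hypothetical stand-ins for the values returned by the kubectl commands
# above (real values come from the PostgresCluster status and secret).
host="10.0.0.10"
dbname="pg-dev-k8s-01"
username="pgadmin"

# Secrets are stored base64-encoded, so decode the value first.
# "Vk13YXJlMTIzIQ==" is the base64 encoding of the example password.
password=$(printf '%s' "Vk13YXJlMTIzIQ==" | base64 --decode)
echo "$password"

# Assemble the client invocation; psql reads the password from PGPASSWORD.
echo "PGPASSWORD=$password psql -h $host -d $dbname -U $username"
```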

Thanks for reading this far. I hope this post has demonstrated the power and usefulness of the DSM Consumption Operator for provisioning databases remotely from Kubernetes clusters. The scripts and files used in this blog post can be found on this GitHub repo.