Sending DSM 9.0 metrics to VCF Operations

In the DSM 9.0 Release Notes, the following item about metrics is listed in the What’s New section: You can use the VMware Data Services Manager API to publish PostgreSQL and MySQL metrics to VMware Cloud Foundation 9.0 (VCF) Operations and Prometheus [..] enabling better visibility, alerting, and performance management for all databases that VMware Data Services manages. In this post, I will show how to configure DSM 9.0 to send Postgres and MySQL database metrics to VCF 9.0 Operations. While this process is rather manual in VCF 9.0, we plan to significantly improve this overall experience for users going forward. As well as having a simplified configuration, we also plan to have some predefined dashboards for users to fine-tune. But today, with DSM 9.0 & VCF 9.0, users who wish to use this functionality need to implement the following steps.

Step 1: Configure a Cloud Proxy

The first thing to be aware of is that you will need to configure a Cloud Proxy to collect data from your physical data centres. A cloud proxy collects data from the endpoint environment and uploads it to VCF Operations. Once configured, you will need the FQDN/IP address and Proxy ID of the Cloud Proxy to continue this setup. There is a Cloud Proxy FAQ available to assist with this initial configuration.

Step 2: Retrieve a VCF Operations Token

This step retrieves a token from VCF Operations; in the next step, the token is used to retrieve the certificates and private key needed to establish secure communication from DSM to VCF Operations, so that DSM can begin to ship database metrics. Run the following curl command to get the token. The “-d” option is the data sent as the payload. Note the placement of the various single and double quotes in the command, especially on the payload; these are important to get right. The command below is split over a number of lines to make it more readable, but with PowerShell, the full command should be run on a single line.

curl -k 
-X POST https://<vcfops-fqdn>/suite-api/api/auth/token/acquire 
-H "Content-Type: application/json" 
-d '{ "username":"<vcfops-username>","password":"<vcfops-password>"}'

e.g.,

curl -k 
-X POST https://flt-ops01a.rainpole.io/suite-api/api/auth/token/acquire 
-H "Content-Type: application/json" 
-d '{ "username":"admin","password":"VMw@re1!VMw@re1!"}'
{"token":"bed00e15-f668-4c6d-8853-3629d3bcc5e4::8c51b567-99b0-4004-844b-e95db98e28e6","validity":1751575396439,"expiresAt":"Thursday, July 3, 2025 at 8:43:16 PM Coordinated Universal Time","roles":[]}

Retrieve the token for the next step. In this example, the token is: “bed00e15-f668-4c6d-8853-3629d3bcc5e4::8c51b567-99b0-4004-844b-e95db98e28e6”.
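If you are scripting these steps, the token can be extracted from the JSON response rather than copied by hand. A minimal sketch, run here against the sample response shown above (sed is used so that jq does not have to be installed; in practice you would capture the live response from the curl call into the RESPONSE variable):

```shell
# Sample response from the token acquire call. In a live run, capture it with:
# RESPONSE=$(curl -sk -X POST https://<vcfops-fqdn>/suite-api/api/auth/token/acquire ...)
RESPONSE='{"token":"bed00e15-f668-4c6d-8853-3629d3bcc5e4::8c51b567-99b0-4004-844b-e95db98e28e6","validity":1751575396439,"roles":[]}'

# Pull the "token" field out of the JSON response with sed.
TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')
echo "$TOKEN"
```

The TOKEN variable can then be reused directly in the Authorization header of the next step.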

Step 3: Retrieve Certificates & Key from Cloud Proxy

Now that we have the token, we can use it in a second curl command to retrieve the Certificate Authority certificate, the client certificate, and the private key.

The command needs the following information:

  1. Token: “bed00e15-f668-4c6d-8853-3629d3bcc5e4::8c51b567-99b0-4004-844b-e95db98e28e6”
  2. ClientId: “46f2d470-07cc-4b82-bd4b-3c007e0f4649” (this can be retrieved from the VCF Operations UI as shown above)
  3. Cloud Proxy IP address: “10.11.10.38” (this can also be retrieved from the VCF Operations UI as shown above)
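If you prefer to script this step, the three inputs can be stitched into the request URL before invoking curl. A small sketch using the example values above:

```shell
# Example values from this step -- substitute your own.
VCFOPS_FQDN="flt-ops01a.rainpole.io"
PROXY_IP="10.11.10.38"
CLIENT_ID="46f2d470-07cc-4b82-bd4b-3c007e0f4649"

# Assemble the clientCertificate download URL for the curl request.
URL="https://${VCFOPS_FQDN}/suite-api/api/applications/clientCertificate/${PROXY_IP}?clientId=${CLIENT_ID}&_no_links=true"
echo "$URL"
```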

The curl command will write all of the certificates and the key to a zip file called “dsm.zip”. Again, the command is shown across multiple lines for readability, but should be executed on a single line.

curl -k 
-X 'GET' 'https://<vcfops-fqdn>/suite-api/api/applications/clientCertificate/<cloud-proxy-ip-address>?clientId=<cloud-proxy-client-id>&_no_links=true' 
-H 'accept:application/octet-stream' 
-H 'Authorization: OpsToken <token>' 
--output "dsm.zip"

e.g.,

curl -k 
-X 'GET' 'https://flt-ops01a.rainpole.io/suite-api/api/applications/clientCertificate/10.11.10.38?clientId=46f2d470-07cc-4b82-bd4b-3c007e0f4649&_no_links=true' 
-H 'accept:application/octet-stream' 
-H 'Authorization: OpsToken bed00e15-f668-4c6d-8853-3629d3bcc5e4::8c51b567-99b0-4004-844b-e95db98e28e6' 
--output "dsm.zip"

Check to ensure that the zip file was created successfully.

dir

    Directory: C:\MYDIR

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a---            7/3/2025    16:05           7028 dsm.zip

Step 4: Build DSM Objects to ship metrics to VCF Operations

Extract the zip file, and use the certificates and key to build a MetricsTarget object on the DSM Gateway. Note that the “dsm.zip” file contains both an encrypted private key and an unencrypted private key; choose the unencrypted one. Next, use “base64 -w 0” to encode the certificate and private key as the tls.crt and tls.key entries in the mt-client-cert Secret shown below. Also note that IP addresses are not included in the VCF Operations Cloud Proxy certificate. Therefore, you should use the FQDN of the Cloud Proxy, rather than its IP address, as the endpoint in the MetricsTarget YAML, since there are no IP SANs in the certificate.
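To illustrate the encoding step, here is “base64 -w 0” applied to a stand-in file. The real inputs would be the certificate and unencrypted private key files extracted from dsm.zip; the file name here is purely illustrative:

```shell
# Stand-in for the certificate/key file extracted from dsm.zip.
printf 'example' > demo.pem

# -w 0 disables line wrapping, so the output is a single base64 string
# that can be pasted into the tls.crt / tls.key fields of the Secret.
ENCODED=$(base64 -w 0 demo.pem)
echo "$ENCODED"

# Round-trip check: decoding returns the original bytes.
printf '%s' "$ENCODED" | base64 -d
```

Note that “-w 0” is a GNU coreutils option; on other platforms the flag for disabling line wrapping may differ.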

Three objects must be created on the DSM Gateway: (1) a Secret containing the VCF Operations/Cloud Proxy TLS client certificate and key, (2) a ConfigMap containing the VCF Operations Certificate Authority certificate, and (3) a MetricsTarget object that configures the endpoint for sending DSM metrics to VCF Operations. This is the YAML manifest that I created to build these objects on the DSM Gateway.

apiVersion: v1
kind: Secret
metadata:
  name: mt-client-cert
  namespace: dsm-system
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTiBDR--<base64 encoded certificate>--FURS0tLS0tCg==
  tls.key: LS0tLS1CRUdJTiBQU--<base64 encoded private key>--UgS0VZLS0tLS0K
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mt-trust
  namespace: dsm-system
data:
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    MIIDxTCCAq2gAwIBAgIUO9PrJ9f/v6tQgvJRIgtxv2xRO3wwDQYJKoZIhvcNAQEL
    BQAwajELMAkGA1UEBhMCSU4xCzAJBgNVBAgMAktBMQwwCgYDVQQHDANCTFIxFzAV
    .
    .
    --<truncated>--
    .
    .
    CVarJhIVpwLpv/0MCaoihemanLmvEIP+mM40zIipMukRBZDiDUwXf1HkARZM5CV7
    TV34DHUe41KV
    -----END CERTIFICATE-----
---
apiVersion: observability.dataservices.vmware.com/v1alpha1
kind: MetricsTarget
metadata:
  name: metrics-default
  namespace: dsm-system
spec:
  endpoint: "https://<fqdn-of-cloud-proxy>:8443/opensource/default/metric"
  timeout: 5s
  type: VCFOps
  tls:
    clientCertificate:
      name: mt-client-cert
    trustBundle:
      name: mt-trust
      namespace: dsm-system

Whilst you can download the DSM Gateway kubeconfig from the DSM UI and run the necessary kubectl commands locally to create the objects defined in the above YAML, I have opted to open a shell directly on the DSM Appliance. Note that the “kg” command is an alias to kubectl which also automatically sets the --kubeconfig argument to point at the API server configuration for the DSM Gateway on the DSM appliance. It is a useful shortcut, rather than typing out the full path to the kubeconfig for every command. Use “kg apply -f <manifest.yaml>” to create the objects on the DSM appliance. If everything is configured correctly and successfully, the following status should be visible on the MetricsTarget.

# kg get metricstarget  metrics-default -n dsm-system
NAME              STATUS
metrics-default   Ready


# kg describe  metricstarget  metrics-default -n dsm-system
Name:         metrics-default
Namespace:    dsm-system
Labels:       dsm.vmware.com/credentials-config=
              dsm.vmware.com/trust-config=mt-trust
Annotations:  dsm.vmware.com/client-certificate-config-version: dsm-system/mt-client-cert/111157
              dsm.vmware.com/credentials-config-version:
              dsm.vmware.com/trust-config-version: dsm-system/mt-trust/111055
API Version:  observability.dataservices.vmware.com/v1alpha1
Kind:         MetricsTarget
Metadata:
  Creation Timestamp:  2025-07-04T09:07:38Z
  Finalizers:
    metricstargets.observability.dataservices.vmware.com/finalizer
  Generation:        1
  Resource Version:  111159
  UID:               940ac72d-99b1-4476-8a7f-a85d2083e914
Spec:
  Endpoint:  https://<fqdn-of-cloud-proxy>:8443/opensource/default/metric
  Timeout:   5s
  Tls:
    Client Certificate:
      Name:                mt-client-cert
    Insecure Skip Verify:  false
    Trust Bundle:
      Name:       mt-trust
      Namespace:  dsm-system
  Type:           VCFOps
Status:
  Conditions:
    Last Transition Time:  2025-07-04T09:25:18Z
    Message:               Metrics target server connection is established and verified
    Observed Generation:   1
    Reason:                Ready
    Status:                True
    Type:                  Ready
Events:                    <none>

Step 5: Observe DSM Metrics in VCF Operations

It may take up to 15 minutes for the DSM metrics to become visible in VCF Operations. To check for DSM metrics, navigate to the VCF Operations UI and look up your database name in the Search. You will get several hits, including some system VCF Operations objects such as Endpoint, Universe, Environment, OS and Application Monitoring Adapter.

The actual metrics are under objects called “dsmpostgres_GENERIC” or “dsmmysql_GENERIC”, which appear in the object topology related to the objects you find. Some OS metrics are available in a “Linux OS” object.

You will also find PostgreSQL/MySQL Database and Application objects which also contain some metrics.

You can now proceed with building dashboards to highlight the metrics that you are most interested in.

Summary

The nice part of this setup, compared to previous versions of Data Services Manager, is that you only need to do this configuration once, and all of the metrics from DSM will flow into VCF Operations via the Cloud Proxy. Previous versions of DSM integrated with Aria Operations on a per-database basis, meaning that administrators had to configure a connection for each database deployed. This is a much simplified approach, and of course we plan to make it even more seamless by allowing administrators to connect DSM to VCF Operations without needing to retrieve tokens and the like. In an upcoming post, I will also show you how to send these database metrics from DSM 9.0 to Prometheus. Please reach out if you have any questions.
