Sending DSM 9.0 metrics to Prometheus & Grafana

Following on from my previous post on how to send VMware Data Services Manager (DSM) 9.0 metrics to VMware Cloud Foundation (VCF) Operations, I now want to show how it is also possible to send the DSM database metrics to Prometheus. In this post, I will demonstrate how to ship the metrics to Prometheus, and then use Prometheus as a data source for Grafana to build a sample dashboard.

To make things a bit simpler for myself, I am going to use Helm to deploy both Prometheus and Grafana on my vanilla Kubernetes (K8s) cluster. Both deployments can be configured by providing a values.yaml file, either during the initial install or as an update to the deployment after it has been installed. The values.yaml is important because there are quite a number of bespoke configuration options that we need to add to Prometheus so that DSM can ship its metrics to it. For Grafana, an out-of-the-box Helm chart will work just fine, but that deployment can also be modified with its own values.yaml to make things a little easier to work with.

Note that you could also set up Prometheus in a Linux VM, but in my experience most users of Prometheus run it within Kubernetes, so that is the approach I am taking in this blog post.

There are four items to configure for Prometheus: (1) set up a user to provide basic authentication, (2) enable TLS and https access, (3) enable the Remote Write Receiver so DSM can ship its metrics to it, and lastly (4) create a Load Balancer configuration for ease of access. Of all of these, TLS is possibly the most challenging. To get a Certificate Authority, certificate and key onto the Prometheus server container, Kubernetes secrets are used, and these secrets are then mounted into the Prometheus container, where the certificates and key can be read and used to provide TLS. Putting all of this together, the following is the values.yaml file that I used to configure the Prometheus deployment in my K8s cluster. The configuration also builds a web.config.yml file to pass to Prometheus, and this file contains the Basic Auth and TLS configuration. I’ll show you how the Kubernetes secrets are created when we discuss the contents of the values.yaml in the next step.

server:
  tcpSocketProbeEnabled: true
  extraArgs:
    # point Prometheus at the web config (Basic Auth + TLS) built under serverFiles below
    web.config.file: /etc/config/web.config.yml
  probeHeaders:
    # once basic auth is enabled, the liveness/readiness probes must authenticate too;
    # this is the base64 encoding of "alice:test"
    - name: Authorization
      value: Basic YWxpY2U6dGVzdA==
  extraSecretMounts:
    # mount the CA secret and the certificate+key secret into the Prometheus container
    - name: ca-mount
      mountPath: /etc/config/ca-secret
      secretName: ca-secret
    - name: tls-mount
      mountPath: /etc/config/tls-secret
      secretName: tls-secret
  service:
    enabled: true
    servicePort: 443
    type: LoadBalancer
    additionalPorts:
    - name: metrics
      port: 9090
      targetPort: 9090
  extraFlags:
  # required so that DSM can remote write its metrics to Prometheus
  - web.enable-remote-write-receiver
serverFiles:
  web.config.yml:
    basic_auth_users:
      alice: $2y$10$eWRWtiV6KvjpJqLyWbsrnuXrSJY/mCe9rUjd7kaif.0AdiM7HfQPWHVWX4TW7RL
    tls_server_config:
      key_file: /etc/config/tls-secret/tls.key
      cert_file: /etc/config/tls-secret/tls.crt
      client_ca_file: /etc/config/ca-secret/prom-ca.cert.pem
      client_auth_type: "VerifyClientCertIfGiven"
  prometheus.yml:
    scrape_configs:
      # Prometheus scraping itself: with TLS and basic auth now enabled,
      # the self-scrape must use https and supply credentials
      - job_name: "prometheus"
        scheme: https
        tls_config:
          insecure_skip_verify: true
        basic_auth:
          username: alice
          password: test
        static_configs:
          - targets: ["localhost:9090"]

Let’s discuss this values.yaml in a little more detail. In the serverFiles > web.config.yml > basic_auth_users section, to enable basic auth, I have added a user called ‘alice’ along with a bcrypt-hashed password. To generate a bcrypt hashed password, check out the htpasswd commands referenced in the Prometheus docs for web configuration here. The password here is a bcrypt hash of the word ‘test’ to keep things simple.
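
For reference, a hash like this can be generated with the htpasswd utility (from the apache2-utils or httpd-tools package); the output differs on every run since bcrypt uses a random salt:

$ htpasswd -nbBC 10 alice test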

In the server > extraSecretMounts section, I reference two secrets to add TLS, one for the CA and the other for the certificate+key combination. These objects are then referenced in the serverFiles > web.config.yml > tls_server_config section. The secrets must be created in the same Kubernetes namespace where Prometheus is installed: if Prometheus is installed in the default namespace, create the secrets there; if Prometheus is installed in its own namespace, the secrets must be in that namespace. Obviously, the CA, certificate and private key must be available before creating the secrets, built using a cert manager or possibly openssl. I don’t cover that in detail here, but a rough sketch follows below.
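
Purely as an illustration, a self-signed CA and a server certificate could be produced with openssl along these lines (the file names match the kubectl commands below; for real use you would also add a subjectAltName for the Prometheus load balancer address):

$ openssl req -x509 -new -nodes -newkey rsa:4096 -sha256 -days 365 -subj "/CN=prom-ca" -keyout ca.key -out ca.cert
$ openssl req -new -nodes -newkey rsa:4096 -subj "/CN=prometheus-server" -keyout cert.key -out cert.csr
$ openssl x509 -req -in cert.csr -CA ca.cert -CAkey ca.key -CAcreateserial -sha256 -days 365 -out cert.crt

With the certificates and key now available, this is how to create those two K8s secrets. Note that the file name given to the CA secret becomes the name of the mounted file, so it must match the client_ca_file entry in the values.yaml: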

# kubectl create secret generic ca-secret --from-file=prom-ca.cert.pem=ca.cert

# kubectl create secret tls tls-secret --cert=cert.crt --key=cert.key

Next, it is just a matter of matching the server > extraSecretMounts > mountPath entries to the serverFiles > web.config.yml > tls_server_config > *_file entries in the values.yaml.

The server > extraFlags entry is used to enable the remote-write-receiver, which is needed later to allow DSM to do what is referred to as PrometheusRemoteWrite, i.e. to push its metrics directly to Prometheus rather than having Prometheus scrape them. The scrape_configs section is standard Prometheus configuration, but do ensure that the scheme is https (it defaults to http), and remember that once basic auth is enabled, even Prometheus’ scrape of itself must supply credentials.

The server > service configuration in the values.yaml sets up Prometheus with a Load Balancer front end. This means we do not have to mess about with port-forwarding commands to get access to the Prometheus UI. But this is an optional setting. If you do not have a Load Balancer, you can absolutely just use port forwarding, and the helm notes output will give you examples of how to do that.
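
For instance, something along these lines (the exact service name is shown in the helm notes output) would expose the UI on https://localhost:9090 instead:

# kubectl port-forward svc/prometheus-server 9090:9090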

With the values.yaml fully configured, you can now use it with the helm command to deploy Prometheus using the prometheus-community chart. If that chart repository has not yet been added to helm, it is a one-off step beforehand:
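
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update

If successful, the install deploys as follows. I have truncated most of the notes that are normally displayed.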

$ helm install prometheus prometheus-community/prometheus  -f values.yaml
NAME: prometheus
LAST DEPLOYED: Thu Jul  3 10:09:15 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Prometheus server can be accessed via port 443 on the following DNS name from within your cluster:
prometheus-server.default.svc.cluster.local
...

If the load balancer is working, it should now be possible to open a browser against https://<prometheus-load-balancer-ip-address> to access Prometheus, and it should display a query page similar to the following.
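
The setup can also be sanity-checked from the command line using Prometheus’ health endpoint; the -k flag skips certificate verification, which is needed here because my CA is self-signed:

$ curl -k -u alice:test https://<prometheus-load-balancer-ip-address>/-/healthy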

The next step is to begin shipping metrics from DSM to Prometheus. To do that, we need to SSH to the DSM appliance and apply the following YAML manifest. The secret contains the basic auth credentials to access Prometheus, matching what was added to the values.yaml previously. The MetricsTarget has the endpoint for Prometheus. You will need to modify this to your own Prometheus endpoint before applying it.

apiVersion: v1
kind: Secret
metadata:
  name: mt-creds
  namespace: default
type: kubernetes.io/basic-auth
stringData:
  username: alice
  password: test

---

apiVersion: observability.dataservices.vmware.com/v1alpha1
kind: MetricsTarget
metadata:
  name: metrics-namespace
  namespace: default
spec:
  type: PrometheusRemoteWrite
  endpoint: "https://<prometheus-lb-ip-addres>:9090/api/v1/write"
  timeout: 5s
  credentials:
    name: mt-creds

Apply the manifest, and check if the metricstarget is Ready:

# kubectl apply -f prometheus-payload-k8s.yaml
secret/mt-creds created
metricstarget.observability.dataservices.vmware.com/metrics-namespace created

# kubectl get metricstarget metrics-namespace
NAME                STATUS
metrics-namespace   Ready

Now return to Prometheus and verify that the DSM metrics are available by simply typing the keyword “dsm” into the query field. The field should autocomplete to show all DSM metrics, as shown here:
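
The same check can be made from the command line via the Prometheus HTTP API, which lists every metric name it currently knows about (again using the alice/test credentials from earlier):

$ curl -sk -u alice:test https://<prometheus-load-balancer-ip-address>/api/v1/label/__name__/values | grep dsm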

Excellent. Everything appears to be working as expected. We can now turn our attention to Grafana. This is a great tool for visualising metrics, and can very easily use Prometheus (which now includes our DSM metrics) as a data source. I followed the instructions from here to get Grafana installed on my K8s cluster, once again using helm. I used a very simple values.yaml for my installation, primarily to set up a Load Balancer front end, but with a few additional options as well. The Grafana values.yaml is included here for completeness. I haven’t put any TLS onto this deployment, so I will access it via http (port 80) once deployed.

service:
  enabled: true
  type: LoadBalancer
  port: 80
  targetPort: 3000
persistence:
  type: pvc
  enabled: true
  # storageClassName: default
plugins:
# here we are installing two plugins, make sure to keep the indentation correct as written here.
- alexanderzobnin-zabbix-app
- grafana-clock-panel
# Administrator credentials when not using an existing secret / need to reset password on first login
adminUser: admin
adminPassword: admin

As with Prometheus, the grafana chart repository needs to be added to helm first if it is not already present:
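
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update

I then used the following command to do the install. I again truncated the notes output: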

$ helm install my-grafana grafana/grafana --namespace grafana -f values.yaml
NAME: my-grafana
LAST DEPLOYED: Thu Jul  3 10:56:25 2025
NAMESPACE: grafana
STATUS: deployed
REVISION: 1
NOTES:
...

When Grafana launches, provide the admin/admin credentials as per the values.yaml. It will prompt you to change the password. After doing so, you will be placed into the Grafana Home Landing Page. From there, select Data sources from the menu on the left hand side. Next, click on Add data source, and choose Prometheus. When the Prometheus settings page opens, add the Prometheus server URL with port 9090, e.g. https://<prometheus-load-balancer-ip-address>:9090. Set Authentication to Basic authentication and provide the user created previously, in this case “alice”/”test”. You can skip TLS certificate validation in this setup, but for production it is something you will need to include for sure. Scroll down to the end of the workflow and click the “Save & test” button. If the data source connection to your Prometheus deployment has been successfully established, Grafana should highlight the fact.
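
As an aside, the same data source can also be provisioned declaratively through the Grafana chart’s values.yaml rather than via the UI. A minimal sketch, assuming the same endpoint and the alice/test credentials from earlier:

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: https://<prometheus-load-balancer-ip-address>:9090
        basicAuth: true
        basicAuthUser: alice
        secureJsonData:
          basicAuthPassword: test
        jsonData:
          # self-signed certificate in this lab setup
          tlsSkipVerify: true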

We can now try to build a very simple dashboard to display some DSM metrics. Navigate to Data sources once more in the left hand menu, and Prometheus should now be displayed. There should also be an option to “Build a dashboard”. Click on this to begin dashboard creation. Next, click on Add visualisation. Click on Prometheus as the data source. This will open a new panel, and here you can begin to add DSM metrics. Let’s add the available metrics for Postgres. In the Metric field, under Select metric, you can start typing “dsmpo” … and all of the Postgres metrics shipped from DSM should pop up automatically. Select the ones that you wish to add. To add more than one metric, click on the Add query button at the bottom of the page. You can also filter to specific databases, clusters, hosts and even pods if you wish. Click on the Run queries button to see what the dashboard looks like. You can now choose to save the dashboard. Here is a very simple one that I created to look at database connections to a particular database called pg02.
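
Under the covers, a panel like that is driven by a simple PromQL selector. The metric and label names below are purely illustrative (take the real names from the autocomplete list), but they show the shape of a query filtered down to the pg02 database:

dsm_postgres_connections{database="pg02"}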

And here it is in the dashboards view:

That completes the post. Hopefully this has enough information to demonstrate how we can ship DSM 9.0 metrics to Prometheus, and how Grafana can then be used to create some visualisations around those database metrics.
