Getting Started with Data Services Manager 2.0 – Part 9: Lifecycle Management

In previous posts, a number of benefits of Data Services Manager (DSM) were highlighted, such as automated backups, ease of scaling, and comprehensive monitoring and alerting. Another feature which is a big differentiator in DSM is lifecycle management. In this post, I am going to show the steps involved in upgrading the DSM appliance/provider, essentially the DSM control plane, when a new update is available. Of course, there is also a plan for lifecycle management of the databases and data services, but that will be a topic for another post. In this post, I will take…

Data Services Manager 2.0 – Infrastructure Policy (Video)

I have created a new video showing how to create an Infrastructure Policy in Data Services Manager (DSM) version 2.0. Infrastructure Policies enable VI Admins to control which vSphere resources are consumed by data services and databases provisioned by DSM. The video also covers how to build an IP Pool and a VM Class, as well as how to choose Compute Resources, Portgroups and Storage Policies to create a complete Infrastructure Policy.

Getting Started with Data Services Manager 2.0 – Part 8: DSM Appliance Restore

A common question I get asked when giving talks on Data Services Manager (DSM) version 2.0 is “what happens to the databases and data services when the DSM appliance / provider has an outage?” The simple answer is that nothing happens to your data services or databases – they will continue to run as before. Of course, without the DSM appliance, you do not have access to the UI to manage and monitor the databases, nor do you have access to the Gateway API. Therefore it won’t be possible to provision new data services. So the next obvious question is…

Getting Started with Data Services Manager 2.0 – Part 7: DSM (Gateway) API

One of the guiding goals for Data Services Manager (DSM) 2.0 is to provide a very rich, Kubernetes-compliant API server for end-users and developers. We refer to this as the Gateway API. In this post, I will demonstrate the Gateway API server’s capabilities and show how it can be used to query the state of the objects that are provisioned through DSM, and also how to modify and manipulate these objects. We will use the kubectl command line interface to query and modify some of the infrastructure components, as well as query the existing data…
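As a rough illustration of what a Kubernetes-compliant API server enables beyond kubectl, here is a minimal Python sketch that lists custom resources using the official kubernetes client. The API group, version and resource plural shown are placeholders for illustration only, not the actual DSM Gateway API resource names, and the kubeconfig is assumed to already point at the Gateway API endpoint.

```python
# Minimal sketch: list custom resources through a Kubernetes-compliant API
# server with the official Python client. The group/version/plural values
# below are placeholders, not the real DSM Gateway API schema.
from kubernetes import client, config

def list_databases() -> None:
    # Assumes the local kubeconfig already points at the Gateway API endpoint
    config.load_kube_config()
    api = client.CustomObjectsApi()

    # Placeholder group/version/plural -- adjust to the CRDs actually exposed
    result = api.list_cluster_custom_object(
        group="example.dataservices.vmware.com",
        version="v1alpha1",
        plural="postgresclusters",
    )
    for item in result.get("items", []):
        print(item["metadata"]["name"])

if __name__ == "__main__":
    list_databases()
```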

Getting Started with Data Services Manager 2.0 – Part 6: Day 2 Operations

In this post, I want to demonstrate some of the key “day 2” features of Data Services Manager 2.0. Day 2 operations are typically operations that an administrator might carry out after a data service / database is already deployed and configured. This blog will discuss operations such as the ability for a DSM administrator to assign database ownership to a different DSM User. We will also see how it is possible to clone an existing database and how to restore a new database from a backup. These operations will be done from the UI but I did want…

Getting Started with Data Services Manager 2.0 – Part 5: Webhooks

Data Services Manager (DSM) 2.0 continues to provide detailed monitoring and alerting, similar to what was available in DSM version 1.x. It continues to offer both email alerting and webhook integration to send notifications to Slack and ServiceNow. In this post, we will look at some of the changes in the User Interface for configuring webhooks. For the purposes of this post, we will examine the configuration of a webhook to send notifications to Slack. The creation of the webhook itself is identical to how it was configured in version 1.x, so there is no point in discussing…
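For readers who want to verify their Slack endpoint before wiring it into DSM, the short Python sketch below posts a test message to a Slack incoming webhook. The webhook URL is a placeholder you would generate in your own Slack workspace; it is not provided by DSM.

```python
# Minimal sketch: send a test notification to a Slack incoming webhook.
# Replace the placeholder URL with one generated in your own Slack workspace.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def send_test_alert(message: str) -> None:
    # Slack incoming webhooks accept a JSON payload with a "text" field
    response = requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)
    response.raise_for_status()

if __name__ == "__main__":
    send_test_alert("Test alert: webhook connectivity check")
```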

Getting Started with Data Services Manager 2.0 – Part 4: Scaling Up and Out

Continuing with my series of blog posts examining the new features and functionality of VMware Data Services Manager version 2.0, this post will look at some of the vertical scaling (scale up) and horizontal scaling (scale out) capabilities in DSM 2.0. We start with a standalone PostgreSQL database deployment, and then focus initially on scale out by changing the topology from a single node to a three-node database cluster. In a standalone Postgres database, the Primary role and the Monitor role exist as separate Pods on the same VM. By changing the topology, the Primary and Monitor…
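The post itself drives the topology change from the UI, but as a hedged sketch of what a programmatic scale-out request might look like through a Kubernetes-compliant API, the Python example below patches a replica count on a hypothetical custom resource. The group, version, resource plural and field names are assumptions for illustration, not the actual DSM schema.

```python
# Minimal sketch: scale a database cluster by patching a custom resource
# via a Kubernetes-compliant API. All group/version/plural/field names are
# illustrative placeholders, not the real DSM Gateway API schema.
from kubernetes import client, config

def scale_out(name: str, namespace: str, replicas: int) -> None:
    config.load_kube_config()  # kubeconfig assumed to point at the Gateway API
    api = client.CustomObjectsApi()

    api.patch_namespaced_custom_object(
        group="example.dataservices.vmware.com",  # placeholder group
        version="v1alpha1",                       # placeholder version
        namespace=namespace,
        plural="postgresclusters",                # placeholder resource
        name=name,
        body={"spec": {"replicas": replicas}},    # placeholder field
    )

if __name__ == "__main__":
    scale_out(name="demo-db", namespace="default", replicas=3)
```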