Getting Started with Data Services Manager 2.0 – Part 2: Configuring Appliance

In part 1, we saw how to deploy the Data Services Manager (DSM) appliance, create our first infrastructure policy, and create a permission which allows a ‘data administrator’ to log on to the DSM appliance. In this second blog in the series, we will use this admin user to complete the DSM appliance setup tasks. Note that the DSM appliance is often referred to as the Provider, and you will see this term used regularly in the UI and the logs. Once the Provider setup is completed, the ‘data administrator’ will be able to provision data services.

First, a note about this ‘data administrator’ persona. While this role is responsible for completing the setup of the DSM appliance, it is also used to create, manage and monitor data services. It takes care of the life-cycle management of the DSM appliance as well as the data services. It is also the persona for monitoring and troubleshooting any issues with the data services, and it can assign ownership of data services to end users. While this might be a separate role to the vSphere administrator, we feel that many vSphere administrators will also be able to fulfil this role. Our plan is to make all of the day-0 through day-2 operations in DSM as intuitive as possible. I’ll cover many of these operations in a future post, but for now, let’s refocus on the steps to complete the configuration of the DSM appliance.

As mentioned, one of the final steps in part 1 was to create a DSM Admin permission. We use this permission to log onto the DSM appliance. The dashboard view, which is the default landing page, displays a number of Settings which are not yet configured. Whilst some of the settings are optional, such as DSM Login LDAP and Log Forwarding, others are required. Warnings are displayed to emphasize that a Provider Repo and a database backup storage are definitely required before any data services can be provisioned.

Tanzu Net Token

In the left-hand menu, navigate to the bottom item, Settings. In the first screen, Information, notice that the Tanzu Net Token is not configured. For non-air-gapped environments which have internet connectivity, DSM reaches out to Tanzu Net to pull down the data services images. Note that this can also be done manually for air-gapped environments, but in this case I do have internet connectivity, so I should add the token by clicking on the ‘Add’ link under Actions, as shown below:

After populating the token (which can be retrieved from your Tanzu Net account, by the way), the token should show as Configured.

Storage Settings

The next step is to add a number of AWS S3 or S3-compatible buckets for the Provider, as well as a bucket for the data services backup destination. The Provider itself requires 3 buckets: one for data service images/templates, another for backups and a third for logs. If required, multiple backup destinations for the data services may be created, but only one is actually required. Thus, you will need some sort of solution outside of DSM to host these buckets. These could be provisioned via some cloud-based solution (e.g. AWS) or via some on-premises solution (e.g. MinIO). Either would work. Let’s take a look at what the Storage Settings look like initially:

As can clearly be seen, none of the Repos have been configured. For the purposes of this blog post, I used MinIO’s on-premises solution to offer the S3-compatible buckets for the Repos. This is an example of the sort of information that needs populating to configure a Repo.
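If you are also using an S3-compatible store such as MinIO, you may want to pre-create the four buckets before touching the DSM UI. Below is a minimal sketch using boto3; the endpoint, credentials and bucket names are all hypothetical placeholders (DSM does not mandate any particular names), so substitute the values from your own environment.

```python
# Sketch: pre-create the four S3-compatible buckets DSM expects.
# Endpoint, credentials and bucket names are hypothetical examples --
# substitute the values from your own MinIO (or other S3) deployment.

# Three buckets for the Provider, one backup destination for data services.
DSM_BUCKETS = [
    "dsm-provider-repo",         # data service images/templates
    "dsm-provider-logs",         # Provider logs
    "dsm-provider-backup-repo",  # Provider backups
    "dsm-database-backups",      # backup destination for data services
]

def create_dsm_buckets(endpoint_url, access_key, secret_key, buckets=DSM_BUCKETS):
    """Create any missing buckets on an S3-compatible endpoint."""
    import boto3  # requires the boto3 package
    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    existing = {b["Name"] for b in s3.list_buckets().get("Buckets", [])}
    for name in buckets:
        if name not in existing:
            s3.create_bucket(Bucket=name)
    return buckets

# Example, against a local MinIO instance:
# create_dsm_buckets("http://minio.lab.local:9000", "minioadmin", "minioadmin")
```

With the buckets in place, configuring each Repo in the DSM UI is then just a matter of pointing the relevant URL field at the right bucket.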

After adding the Provider Repo URL, connecting and then saving it, the configuration will look similar to the following:

This step, if using the Tanzu Net method for downloading images (and assuming the Tanzu Net Token has already been configured), will trigger what is known as a Release Processing job. This means that all of the necessary images/templates to build out data services via DSM will be downloaded to the Provider Repo bucket. However, before deploying any data services, you will need to repeat this storage configuration process for the Provider Log Repo URL, the Provider Backup Report Repo URL and the Database Backup Storage. Once all 4 buckets have been added, the Storage Settings are complete and should look similar to the following:
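Once the Release Processing job completes, one way to sanity-check that images actually landed in the Provider Repo bucket, outside of the DSM UI, is simply to list the bucket’s contents. This is a hedged sketch, not a DSM feature: it takes a boto3 S3 client pointed at your own (hypothetical) endpoint and bucket name.

```python
def list_repo_objects(s3_client, bucket, max_keys=20):
    """Return up to max_keys object keys from an S3-compatible bucket.

    `s3_client` is a boto3 S3 client, e.g.
      boto3.client("s3", endpoint_url="http://minio.lab.local:9000",
                   aws_access_key_id=..., aws_secret_access_key=...)
    """
    paginator = s3_client.get_paginator("list_objects_v2")
    keys = []
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            keys.append(obj["Key"])
            if len(keys) >= max_keys:
                return keys
    return keys

# Example (bucket name is a placeholder):
# for key in list_repo_objects(s3, "dsm-provider-repo"):
#     print(key)
```

A non-empty listing after Release Processing finishes is a quick confirmation that the download succeeded; an empty bucket suggests the job is still running or the Repo URL/credentials are wrong.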

Those steps are enough to allow the data administrator to begin provisioning data services. Of course, there are other configuration options, such as SMTP Settings, LDAP Settings, Log Forwarding and Webhook Settings. SMTP is configured to send email alerts about events occurring with the data services. Similarly, a webhook is configured to send alerts to Slack or ServiceNow, for example. Log Forwarding simply sends logs from the Provider to Log Insight/Aria Operations for Logs; we are able to ship logs from DSM to either the on-premises solution or the SaaS/cloud offering. LDAP Settings, which can be configured either via the DSM UI or via the vSphere Client Permissions section seen in part 1, allow LDAP users to access the DSM appliance. However, all of these settings are optional, and leaving them unconfigured does not prevent a data service from being deployed.
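To make the webhook option a little more concrete, here is a sketch of how an incoming webhook such as Slack’s consumes a JSON POST. To be clear, this is not DSM’s actual payload format (which I have not inspected); the message shape, service name and webhook URL are all illustrative assumptions, using only the Python standard library.

```python
import json
import urllib.request

def build_alert_payload(service, event, severity="warning"):
    """Build a simple Slack-style message body for a data service event.

    The payload shape is illustrative only -- it is NOT DSM's actual
    webhook format, just the {"text": ...} body that Slack incoming
    webhooks accept.
    """
    return {"text": f"[{severity.upper()}] DSM data service '{service}': {event}"}

def post_webhook(url, payload):
    """POST the JSON payload to an incoming-webhook URL."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (webhook URL and service name are placeholders):
# post_webhook("https://hooks.slack.com/services/T000/B000/XXXX",
#              build_alert_payload("pg-prod-01", "backup failed", "critical"))
```

The same pattern applies to ServiceNow or any other endpoint that accepts a JSON POST; only the payload shape changes.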

The dashboard view in the DSM UI should now show all configured items:

And the External Storage view should also show the health status of each of the configured components.

We will look at the creation of a data service, either a PostgreSQL or a MySQL database, in part 3 of this series.
