Getting Started with Data Services Manager 2.0 – Part 4: Scaling Up and Out
Continuing my series of blog posts examining the new features and functionality of VMware Data Services Manager version 2.0, this post looks at vertical scaling (scale up) and horizontal scaling (scale out) in DSM 2.0. We start with a standalone PostgreSQL database deployment, and focus initially on scale out by changing the topology from a single node to a three node database cluster. In a standalone Postgres database, the Primary role and the Monitor role exist as separate Pods on the same VM. By changing the topology, the Primary and Monitor are relocated to their own separate VMs, and a Replica is also created on its own VM. After that, we will change the topology from 3 nodes to 5 nodes, which adds two additional Replicas. Note that in 2.0, all of the VMs are placed on the same vSphere cluster.
For scale up, I will show how the VM Class and Disk Size can be changed simultaneously. This recreates the nodes in the cluster with new compute resources and new disk sizes. I will finish this post by showing how to edit the advanced database options and apply these to the database cluster.
Note: DSM version 2.0 is not yet generally available. Therefore some of the screenshots captured in the upcoming posts may differ slightly in the final product. However, for the purposes of our getting started series of posts, this should not matter.
Change topology from standalone to 3 node
I have already deployed a standalone, single node PostgreSQL database. If we log in to the DSM UI and navigate to Databases > (name of database) > Summary, the Database Replication view shows that the Primary and Monitor are indeed on the same VM. This means that there is no replication of the database at present.
Let’s proceed with changing the topology from standalone to three nodes, giving us at least one Replica alongside the Primary for availability. Click on the Edit link next to Database Replication. Change the Replication Mode to Single vSphere Cluster and select a Topology of 3 (1 Primary, 1 Replica, 1 Monitor).
This will automatically initiate the creation of additional VMs and grow the database cluster to three nodes. It will also split the Primary and Monitor Pods from the same K8s node/VM and place them on different K8s nodes/VMs. Once the operation completes, Database Replication will be updated with the new VMs and their respective roles, as shown below.
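As well as checking the Database Replication view in the DSM UI, replication can be verified from inside PostgreSQL itself. Assuming you can connect to the Primary node (the connection details are available in the DSM UI), the standard `pg_stat_replication` view lists the connected standbys:

```sql
-- Run against the Primary node.
-- pg_stat_replication returns one row per connected standby,
-- so a 3-node topology (with 1 Replica) should show a single
-- row, typically with state 'streaming'.
SELECT application_name, state, sync_state
FROM pg_stat_replication;
```

This is plain PostgreSQL behaviour rather than anything DSM-specific, which makes it a useful sanity check independent of the UI.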
Change topology from 3 node to 5 node
Let’s assume that this is a very critical database and we want to ensure optimum availability in the event of failures. To address this requirement, we will add some additional replicas. The same process is repeated, but this time the Topology selected is 5 (1 Primary, 3 Replicas, 1 Monitor).
This results in the automatic creation of two additional VMs, both of which assume the role of Replica. The Database Replication view shows the new topology.
Now that we have seen how to scale out, let’s see how we can scale up.
Change Infrastructure Policy for VMs
In this section, both the VMs’ compute (CPU & Memory) and their disks will be resized. For the compute, a new VM Class will be chosen in the Infrastructure Policy. Currently, the VM Class is set to ‘medium’. At the same time, the disk will be resized from the default value of 60GB to a new size of 80GB. To begin, click on the Edit option in the Infrastructure section as shown above.
There is no option to select a different storage policy here, as only one storage policy was added to the infrastructure policy when it was created. If multiple storage policies had been added, a new storage policy could also be selected at this point. However, we did add multiple VM Classes to the infrastructure policy, so it is possible to select a new one for the VMs that are provisioned to host the database or data service. Click on the drop-down arrow to see the available VM Classes.
In this scenario, we will change the VMs from medium to the large VM class. The CPU and Memory sizes are also visible alongside the class for reference. Assume that we also wanted to create a larger disk. Simply modify the Disk Size field with the new size.
Now click Save, and a new set of VMs with the resource characteristics selected in the infrastructure policy will be created and added to the database cluster in a rolling fashion. The previous set of VMs with the original resource characteristics will eventually be removed from the database cluster. When the operation completes, the database cluster will be made up of VMs using the large VM Class, each with a database disk size of 80GB.
Edit Database Options
A final operation to show is that one can make changes to the database configuration after it has been provisioned. These database options can of course be configured when the database is created, but it is also possible to alter them after the fact. Say, for instance, that the number of connections to the database needs to increase. By default, PostgreSQL databases are provisioned by DSM with a max_connections value of 100. Let’s say that the database has been running for some time, and new users now require access, so we wish to increase this to 200. Simply click on Edit next to the Database Options Details in the Summary tab and click on Add Parameter.
Next, type in the Parameter Name and Value. In this case, as mentioned, we wish to increase max_connections to 200.
Click Save, and the PostgreSQL configuration is updated across all nodes in the database cluster. The Database Options Details in the Summary tab should now reflect the configuration change.
Another check we can make to confirm that the change has taken effect is to connect to the database directly and query the max_connections parameter. As the query below shows, it has indeed:
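For reference, the check itself is a one-line query. Assuming a psql session to the database (the connection details are available in the DSM UI), either of the following standard PostgreSQL queries will do:

```sql
-- Report the currently effective value of max_connections.
SHOW max_connections;

-- Equivalent, but also shows where the value was set from.
SELECT name, setting, source
FROM pg_settings
WHERE name = 'max_connections';
```

Both should report 200 once DSM has rolled the change out to every node. Note that in vanilla PostgreSQL, changing max_connections requires a server restart to take effect; DSM handles the restart of the cluster nodes as part of applying the option.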
There are other configuration options on the Summary page which we have not discussed, namely backup/restore, maintenance and lifecycle management. These will be discussed separately in future ‘Getting started’ blog posts. However, I hope this post has given you some idea into how Data Services Manager can provide an administrator with the ability to seamlessly scale and configure databases in a supported manner as the need arises.