vSAN Data Persistence platform (DPp) Revisited

Around 18 months ago, I published an article highlighting a new feature called the vSAN Data Persistence platform, or DPp for short. In a nutshell, it describes a set of partner services built into vSphere with Tanzu. A few things have changed since I last wrote about it, so I thought I would revisit the topic. I am using my recently updated vCenter Server version 7.0.3e (build 19717403), and a vSphere with Tanzu Supervisor Cluster running v1.22. In this post, I will go through the new steps to install MinIO as a vSphere Service. I will then show how to use this service to provision an S3 Object Store to a vSphere Namespace in vSphere with Tanzu.

The major change since the previous version is that partner services used to be automatically registered and available for selection in the vSphere client. In the newer version, the partner service must be registered manually by downloading a YAML manifest for the particular service from a dedicated repository. Once the partner service is registered, the vSphere administrator can proceed to install the partner service/operator. After that, services from the partner (e.g. MinIO S3 Object Stores) can be provisioned to the various vSphere Namespaces, referred to as tenants. I will show how this is done shortly.

Note that there are a number of requirements to use the vSAN Data Persistence platform, and these have not changed. The main one is a requirement on NSX-T, since the services continue to use PodVMs, and PodVMs are only available with NSX-T at the time of writing. There is also a dependency on vSAN, which provides the persistent storage for the services. vSAN can be used in one of two modes: vSAN Direct or Shared Nothing Architecture (SNA). You could also use vSAN in “normal” mode, which entails using standard vSAN protection policies. However, this consumes far too much capacity, since the data is then protected both internally by the application and externally by vSAN. Refer to the official documentation for details about each mode. Suffice to say that if vSAN Direct is chosen, which is the more performant option, the entire vSAN cluster must be dedicated to vSAN Direct workloads. If SNA is chosen, the vSAN cluster may be shared with other workloads, including VMs. We will use the SNA approach here. Last but not least, vSphere with Tanzu is also a requirement.
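As a quick sanity check before starting, you can log in to the Supervisor Cluster with the kubectl vSphere plugin and confirm that the storage policies assigned to your vSphere Namespaces are surfaced as Kubernetes storage classes. A minimal sketch, where the Supervisor address and namespace name are placeholders from my environment:

```sh
# Log in to the Supervisor Cluster (requires the kubectl vSphere plugin)
kubectl vsphere login --server=192.168.0.1 --vsphere-username administrator@vsphere.local

# Switch context to the vSphere Namespace
kubectl config use-context cormac-ns

# Storage policies assigned to the namespace appear as storage classes,
# e.g. the vSAN SNA policy used later in this post
kubectl get storageclass
```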

I have the appropriate infrastructure in place, vSphere with Tanzu is deployed, and I have some vSphere Namespaces created. Let's proceed with the deployment of the MinIO service.

1. Download the partner YAML

The first step is to download the partner service YAML. This is found in the https://vmwaresaas.jfrog.io/ repository. Simply navigate to the appropriate partner folder located under Artifacts > vDPP-Partner-YAML and download the associated YAML manifest. Below is the JFrog repository where the services are stored. I have navigated to the MinIO service, but you can see that other partner services are also available. Click on the URL to the file and download it.
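If you prefer the command line, the manifest can also be pulled down with curl. The path below is illustrative only; copy the actual file URL from the repository browser for your chosen partner and version:

```sh
# Hypothetical path - use the real file URL shown in the JFrog repository browser
curl -LO "https://vmwaresaas.jfrog.io/artifactory/vDPP-Partner-YAML/MinIO/minio.yaml"
```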

2. Add a new vSphere Service

Navigate to the Workload Management view in the vSphere client. By default, only a single service exists, the VM Service. Click on ADD in the Add New Service box, as shown below:

This takes you to the “Register Service” step, which is where you provide the YAML manifest for the partner service. Click on the UPLOAD button and select the YAML manifest downloaded in step 1.

Once uploaded, details about the manifest that was just provided are displayed. Review the details, ensuring that this is indeed the partner service that you wish to install. If it is, click Next.

Once the MinIO service has been registered, you should observe a new service box under the Workload Management > Services view.

This will now present MinIO as a vSphere Service that can be deployed either from the Actions drop-down box above, or directly from the vSphere client. We will deploy the MinIO operator onto the Supervisor Cluster next.

3. Deploy MinIO vSphere Service on Supervisor

Now that the MinIO service is registered, it can be installed on the Supervisor. Navigate to Cluster > Configure > vSphere Services > Overview in the client. Under the Available tab, MinIO should be listed. Select it, and click on INSTALL.

Administrators are now presented with the following install screen. There is currently only one version of the MinIO service available, v2.0.0. The other information that can be provided by the administrator relates to the image registry. This is useful where rate limiting for anonymous accounts is encountered. If deploying in an air-gapped environment, it is also possible to download the MinIO operator images from Docker Hub, store them in a local image registry, and redirect the registry information here to that local registry. In this case, I have provided Docker Hub credentials to avoid pulling as an anonymous user and hitting the rate limiting mentioned earlier. Administrators should be able to leave these entries blank if neither of the scenarios above applies to them. However, I noticed that in this version I did have to supply the registryName as “index.docker.io” for the deployment to proceed. I have raised this with the engineering team to find out why.
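For the air-gapped scenario mentioned above, the general approach would be to mirror the operator images from Docker Hub to a local registry, then point the registry fields in the install screen at that registry. A rough sketch, where the image name, tag, and local registry address are all placeholders (check the partner YAML for the exact images the service deploys):

```sh
# Pull the MinIO operator image from Docker Hub (image/tag are placeholders)
docker pull minio/operator:v4.x.x

# Re-tag it for the local registry and push
docker tag minio/operator:v4.x.x registry.local/minio/operator:v4.x.x
docker push registry.local/minio/operator:v4.x.x
```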

Once the relevant components of the service are installed in the Supervisor cluster, the administrator is prompted to refresh the vSphere client, where a number of changes are made. First, the MinIO service now appears in the Installed tab of vSphere Services, as shown below.

New menu options are also added to the vSphere client, allowing you to add a license as well as provision MinIO S3 Object Stores to tenants in different vSphere Namespaces.

Note: You may observe a number of additional ‘Failed’ PodVMs deployed as part of the MinIO service. These do not impact the operation of the service, and can be manually deleted from the Supervisor cluster if required, as shown below.
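A sketch of the kubectl approach to cleaning these up follows. The operator namespace name will vary per deployment, so treat minio-domain-xxx as a placeholder:

```sh
# List any failed PodVMs in the MinIO operator namespace (namespace is a placeholder)
kubectl get pods -n minio-domain-xxx --field-selector=status.phase=Failed

# Delete them; the service itself is unaffected
kubectl delete pods -n minio-domain-xxx --field-selector=status.phase=Failed
```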

4. Create a MinIO tenant in a vSphere Namespace

Now that the service is installed, we can move on to the next step: the deployment of a MinIO S3 Object Store in a vSphere Namespace. To add a new tenant, select Tenants from the MinIO menu, and click ADD.

On the first landing page, the name of the tenant needs to be provided, along with the vSphere Namespace where the object store is to be provisioned and a Storage Class. This is where you can select between vSAN Direct, vSAN SNA, and even normal vSAN objects. Each storage class is tied directly to a vSAN storage policy, and only the storage policies that have been added to the vSphere Namespace will be visible. In the tenant below, I have chosen to provision the object store in the vSphere Namespace called cormac-ns, using a vSAN SNA policy.

Note that there is an Advanced Mode option that allows many additional parameters to be configured. For the purposes of this demonstration, I am going to stick with the defaults. The next screen relates to Tenant Size. The minimum number of MinIO servers (deployed as a StatefulSet) is 4. The one confusing part for me was the allocation of CPUs to each server. In my environment, each physical ESXi host has 40 CPUs, yet the tenant logic seems to try to allocate as many as 28 CPUs to each server. The issue is that, at 28 CPUs per server, each host can only accommodate a single MinIO server, so in the event of a host failure, vSphere HA may not be able to restart the affected server elsewhere, even when set to tolerate 1 host failure. This prevents one of the PodVMs in the MinIO server StatefulSet from coming online. I have reported this back to the engineering team, but for the moment, ensure that you modify the CPU allocation to the servers to a number that can be accommodated by your vSphere HA setting. I have set my CPU allocation to 16.
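If you do hit this sizing issue, the symptom on the Supervisor Cluster is one of the MinIO server PodVMs stuck in Pending with an insufficient CPU scheduling event. A sketch of how one might diagnose it, assuming the tenant was deployed to cormac-ns (the pod name below is a placeholder):

```sh
# Look for a MinIO server pod stuck in Pending
kubectl get pods -n cormac-ns

# The Events section should show the scheduler failing with an
# "Insufficient cpu" style message (pod name is a placeholder)
kubectl describe pod minio-tenant-pool-0-3 -n cormac-ns
```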

Preview the configuration, and make careful note of the credentials when they are displayed. These are how you access the MinIO console once the tenant configuration is deployed.

Assuming everything deploys successfully, a new MinIO S3 Object Store should now be available within the cormac-ns vSphere Namespace.
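The tenant can also be inspected from the command line. The MinIO operator represents each tenant as a Tenant custom resource, so something along these lines should work (resource and service names may differ slightly between operator versions):

```sh
# List MinIO tenants in the vSphere Namespace (CRD name may vary by operator version)
kubectl get tenants -n cormac-ns

# The MinIO and Console endpoints are exposed as LoadBalancer services
kubectl get svc -n cormac-ns
```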

5. MinIO Tenant Details and Health

The vSphere client provides some very useful information about capacity and status for the MinIO tenant. Below are screenshots taken from the Details view and the Health view. The Details view provides information such as the MinIO Endpoint (for bucket access) as well as the Console Endpoint for management purposes (although the IP addresses have been obfuscated below):

The health view provides a list of the PodVMs which have been built to hold the object store, as well as which ESXi hosts they are running on.

All of the PodVMs can be seen in the cormac-ns vSphere Namespace in the vSphere client inventory.

6. MinIO Console Endpoint

Using the credentials that were presented during tenant creation, administrators can now point their browsers to the MinIO Console Endpoint and login. Here is an example of what the initial landing page looks like in my environment.

The MinIO Endpoint can be used by applications that wish to push data to the S3 Object Store, for example Velero backups and restores.
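As a simple example, here is how the MinIO client (mc) could be used to create a bucket and push an object to the new store, using the MinIO Endpoint and the credentials noted earlier (the endpoint address, alias, bucket, and file names below are all placeholders):

```sh
# Register the tenant endpoint under an alias (endpoint/credentials are placeholders)
mc alias set mytenant https://20.0.0.1 ACCESS_KEY SECRET_KEY

# Create a bucket and copy a file into it
mc mb mytenant/backups
mc cp ./myfile.txt mytenant/backups/

# Verify the object landed in the bucket
mc ls mytenant/backups
```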

That completes the post. Note that other S3 Object Stores are also available from Dell and Cloudian, and that the Velero vSphere Plugin is also available as a service on vSphere with Tanzu.