A first look at DPp (Data Persistence platform) and MinIO

Today I want to take a closer look at the new vSAN Data Persistence platform (DPp). I mentioned that this was a key reason for updating my VMware Cloud Foundation environment to version 4.2, which officially released last week. One of the services included in DPp is the MinIO S3-compatible object store. Although I have written about MinIO a number of times on this site, the fact that it is now incorporated as a service in the new DPp makes it easier than ever to deploy. In this post, we will look at the steps involved in enabling the MinIO S3 service in DPp. We will first enable the MinIO Operator and its associated vSphere plugin. Then we will go ahead and create a dedicated S3 object store for a “tenant” in vSphere with Tanzu on VCF (also known as VCF with Tanzu). This is achieved by assigning a VCF with Tanzu Namespace its own unique MinIO S3 object store.

Enable MinIO Service

Let’s start with a view of Supervisor Services. We had a brief look at these when we upgraded my VMware Cloud Foundation (VCF) environment to version 4.2. Since this is VCF, NSX-T is also available. This means that we can deploy PodVMs, the underlying building block for these services. Without NSX-T, it is currently not possible to deploy PodVMs, and thus it won’t be possible to use these services.

Anyway, as you can see, there are three available services in this release: MinIO, Cloudian HyperStore (also an S3 object store) and the Velero vSphere Operator (for backing up and restoring Kubernetes objects). In this post, we will focus on MinIO. In future posts, we will look at the other services.

To enable the MinIO service, simply select it. The Enable option becomes highlighted as shown below, so click on that next.

The next step is to select a Repository endpoint from which to pull the MinIO images. You can simply leave these fields blank, which is the default; this pulls the images directly from Docker Hub. However, there may be occasions where, on the advice of VMware or MinIO, you need to pull the images from another repository. I’ve shown the fields populated below for reference, but for your deployment you can just leave them blank (and hope you do not encounter the new Docker Hub rate limiting issue). Note also that there is no http/https prefix on the endpoint. Finally, there is the option to add labels to the MinIO objects.

The next step is to accept the EULA, and finally click on Finish. MinIO provides a 60-day evaluation period, after which you will need to purchase a license.

This will now create a new Namespace on your Supervisor Cluster (e.g. minio-domain-X) and begin to provision a number of PodVMs, including the MinIO Operator. Here is a view of the PodVMs from the vSphere client.

The PodVMs can be examined by logging in to the Supervisor cluster and changing context to the new minio-domain-X Namespace. This is also useful for troubleshooting, such as describing a PodVM to check its events or displaying its logs, as shown after the login output below.

$ kubectl-vsphere login --vsphere-username administrator@vsphere.local \
--server=https://20.0.0.1 \
--insecure-skip-tls-verify

Password: ********
Logged in successfully.

You have access to the following contexts:
   20.0.0.1
   cormac-ns
   minio-domain-c8

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

$ kubectl config use-context minio-domain-c8
Switched to context "minio-domain-c8".

$ kubectl get pods

NAME                                        READY   STATUS    RESTARTS   AGE
minio-minio-operator-578f6c9c44-fpsn6       1/1     Running   0          22m
minio-vsphere-75bfb45cd7-l2gdx              2/2     Running   0          22m
minio-vsphere-75bfb45cd7-rhqzb              2/2     Running   0          22m
minio-vsphere-75bfb45cd7-zr9fq              2/2     Running   0          22m
minio-vsphere-controller-597c7db9c4-hd9jm   1/1     Running   0          22m
$
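For example, to troubleshoot the MinIO Operator PodVM, kubectl describe surfaces recent events (scheduling, image pulls and so on), while kubectl logs shows the operator output. Using the operator pod name from the listing above (substitute the pod name from your own environment):

$ kubectl describe pod minio-minio-operator-578f6c9c44-fpsn6
$ kubectl logs minio-minio-operator-578f6c9c44-fpsn6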

The deployment also installs a MinIO plugin into the vSphere client and provides some new items under Cluster > Manage > MinIO. From here, we can proceed to the next step: enabling a Tenant, and assigning that Tenant’s S3 object store to another Namespace within the Supervisor cluster. This Namespace could be allocated to a developer or a team of developers, who would then have sole use of this S3 object store. Below is the MinIO > General view.

Create a MinIO Tenant

Let’s proceed with the creation of a MinIO Tenant. The steps are pretty straightforward. To make things simple, I have already created a new Supervisor cluster Namespace called minio-ns. In this Namespace, I added a single storage policy, the default vSAN storage policy (if you want to learn more about the vSphere with Tanzu constructs, check out this earlier blog post which covers such topics). This is where I am going to place the tenant S3 object store. The first step is to navigate to Cluster > Manage > MinIO > Tenant. From here, click on the ADD button to create a new Tenant.

As I mentioned, I have previously created a Namespace, so you simply select it here, along with a name and the desired Storage Class. You can choose whatever name you like, but the Namespace must already exist; the wizard will only accept existing Namespaces. The Storage Class drop-down lists the Storage Classes that you have associated with the Namespace. Other options are available at this point, such as vSAN Direct, but that is another post. Let’s try to keep things simple for the moment.
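As an aside, you can confirm from the CLI that the storage policy assigned to the Namespace is visible as a Kubernetes Storage Class before creating the tenant. A quick check; note that the Storage Class name is environment-specific, so I am assuming the default vSAN policy surfaces under a name such as vsan-default-storage-policy, and the name in your environment may differ:

$ kubectl config use-context minio-ns
$ kubectl get storageclass

The policy should appear in the output, and kubectl describe namespace minio-ns should also show a corresponding storage quota entry for it.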

The next step in the process is to size the tenant. I’ve selected the defaults, but as you can see over the next few screenshots, there are a number of configurable parameters here:

Note the Number of Nodes is set to 4. MinIO will try to place each of its application nodes on a separate K8s “worker” node, which in the case of the Supervisor cluster is an ESXi host. Thus, for this tenant size to work, you will need an appropriate number of Supervisor cluster “agent” nodes to accommodate it. Here is the list of Supervisor nodes from my environment. As you can see, there are 3 control plane VMs and 4 worker/agent ESXi hosts. This is sufficient to roll out the MinIO service in a 4 node configuration.

$ kubectl get nodes
NAME                               STATUS   ROLES                  AGE     VERSION
422517915769421519ea0375b1ac8970   Ready    control-plane,master   2d16h   v1.22.6+vmware.wcp.2
4225bf8f595eb52f3b3d516dd71f9fa8   Ready    control-plane,master   2d16h   v1.22.6+vmware.wcp.2
4225f3ed148c70af9de3cbbdd82b2fe1   Ready    control-plane,master   2d16h   v1.22.6+vmware.wcp.2
w4-hs4-i1501.eng.vmware.com        Ready    agent                  2d16h   v1.22.6-sph-2d0356d
w4-hs4-i1502.eng.vmware.com        Ready    agent                  2d16h   v1.22.6-sph-2d0356d
w4-hs4-i1503.eng.vmware.com        Ready    agent                  2d16h   v1.22.6-sph-2d0356d
w4-hs4-i1504.eng.vmware.com        Ready    agent                  2d16h   v1.22.6-sph-2d0356d

I won’t get into how MinIO does Erasure Coding to provide highly available storage for the service, but I did cover this in a previous post on MinIO here for those who are interested in learning more about it.

Once you have completed the sizing, preview the configuration, and click Create:

The final step in the wizard provides you with access credentials, both for the MinIO S3 object store and the MinIO Management Console. Keep these in a safe place.

Another set of PodVMs is launched in the Namespace specified previously, in this example minio-ns.

We can also examine the PodVMs from the CLI, as we did with the MinIO Operator.

$ kubectl-vsphere login --vsphere-username administrator@vsphere.local \
--server=https://20.0.0.1 \
--insecure-skip-tls-verify

Password: ********
Logged in successfully.

You have access to the following contexts:
   20.0.0.1
   cormac-ns
   minio-domain-c8
   minio-ns

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

$ kubectl config use-context minio-ns
Switched to context "minio-ns".

$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
minio-tenant-console-84474ddfbd-pkh7d   1/1     Running   0          42m
minio-tenant-zone-0-0                   1/1     Running   0          42m
minio-tenant-zone-0-1                   1/1     Running   0          42m
minio-tenant-zone-0-2                   1/1     Running   0          42m
minio-tenant-zone-0-3                   1/1     Running   0          42m
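While the tenant is initializing, the underlying storage can be watched from the CLI as well. Each MinIO volume should be backed by a persistent volume claim in the tenant Namespace, so with this sizing you would expect 64 PVCs once everything is bound. For example:

$ kubectl get pvc
$ kubectl get pvc --no-headers | wc -l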

And soon afterwards, once all of the storage configuration is complete, we should see the MinIO tenant finish configuring and eventually appear online:

Select the tenant name and click on the DETAILS link to get a more detailed view. You can see in the information below that the Console Endpoint is not yet available, and that some volumes are not yet configured (only 57 out of 64). We can also see that tenant usage is not yet available.

However, after a little more time, the tenant appears fully online. 64 volumes are online, the 4 nodes / server instances are online, the 16TB of storage is available, and both the MinIO Endpoint and the Console Endpoint are populated.
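This status can also be checked from the CLI. The vSphere MinIO service is built on the MinIO Operator, which upstream tracks each tenant through a Tenant custom resource; assuming the integrated operator exposes the same CRD, something like the following should report the tenant state from the minio-ns context:

$ kubectl get tenants.minio.min.io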

Let’s now check both the MinIO Management Console and S3 object store.

Verify functionality

As you can see, the MinIO tenant provides a link to both the Console and the S3 object store. These are unique MinIO instances for that tenant in that Namespace. Once they have the access credentials, tenants have access to both. As a vSphere administrator, you may wish to keep the Console privileges for yourself, but provide your developer or team of developers with access to the object store only. Either way, whoever has the credentials has access.

Below are three ways in which the object store can be accessed. The first is via the MinIO Browser. As you can see, I have already created a bucket via the browser called my-first-bucket. I can now use this MinIO browser interface to upload files to the bucket, if I wish.

The second method is via the Console. As you can see, this also has a view of the buckets that my tenant has created. The Console gives you the ability to manage and monitor the S3 object store.

Lastly, and probably most importantly, I can access the bucket via a third party S3 client, so long as I provide the access credentials. This is an S3 browser that I have used in the past from NetSDK Software. It seems to work well for me, but this is not an endorsement; please do your own research on what a suitable S3 client might be for you. Anyway, as you can see, this client can also access my-first-bucket, and it can upload files as well.
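Command line S3 clients work equally well. As a minimal sketch, here is how you might point MinIO’s own mc client at the tenant; the endpoint, access key and secret key placeholders stand in for the tenant endpoint and credentials handed out by the wizard, and the global --insecure flag (which skips TLS verification) may be needed if the endpoint uses a self-signed certificate:

$ mc --insecure alias set tenant1 https://<minio-endpoint> <access-key> <secret-key>
$ mc --insecure ls tenant1
$ mc --insecure cp ./somefile.txt tenant1/my-first-bucket/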

I hope that gives you a good idea of how you can quickly deploy multiple MinIO S3 object stores in various vSphere with Tanzu / VCF with Tanzu deployments, using vSAN DPp, on a per Namespace basis. This gives your developers (or teams of developers) easy access to an on-premises S3 object store should they need it.

Stay tuned while I look at the other services provided by DPp over the coming weeks.