Provisioning databases with Aria Automation, Cloud Consumption Interface and Data Services Manager – Part 3: CCI Config
In part 1 of this series, we saw how to set up Aria Automation version 8.17, which is required for Cloud Consumption Interface support. In part 2, we saw how to enable the Cloud Consumption Interface (CCI) in the Supervisor of vSphere with Tanzu. However, even though CCI is now deployed as a Supervisor Service, it is not yet fully configured to work with Aria Automation. Thus, it is still not possible for an Aria Automation user to interact with the Supervisor in vSphere with Tanzu to create TKG clusters or VMs using the VM Service. This is what I want to show you in this post. We are going to create a number of objects using the kubectl command line tool, stepping through the exercise outlined in the official documentation here & here. This will integrate CCI with Aria Automation. To make things a little easier, I have created a repo on GitHub which contains all of the YAML manifests that I used to create the Projects, Regions and Supervisor Namespace Classes during this exercise. Let’s start by showing how to log in to the Cloud Consumption Interface.
Log in to the Cloud Consumption Interface, CCI
The first step is to log in to CCI. Although the official documentation talks about using a username and password mechanism, I could only log in to CCI with a token. Thus, after downloading the kubectl cci tool (see the official docs for location), I used the following curl command, pointed at my Aria Automation instance, to get a token for my Active Directory user amaury@rainpole.com. My Aria Automation instance is called vra.rainpole.com.
% curl --insecure --location --request POST \
  'https://vra.rainpole.com/csp/gateway/am/api/login?access_token' \
  --header 'Content-Type: application/json' \
  --data-raw '{ "username": "amaury@rainpole.com", "password": "VMware123!", "domain": "rainpole.com"}'

{"refresh_token":"xgiANA0UKPWoeIjavIXgVrUhYui2zj9b"}
[Update] There is a known issue with the version of the CCI plugin referenced in the documentation. This KB article has links to more recent CCI plugins which are supported with Aria Automation v8.17. These have the username option as well as the token option.
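With one of these updated plugins, it should be possible to log in with a username directly and skip the token step entirely. A minimal sketch, assuming the newer plugin exposes a username flag (I have not confirmed the exact option name; check kubectl cci login --help):

# Sketch only: username-based login with an updated CCI plugin (flag assumed;
# verify the exact option with 'kubectl cci login --help')
% kubectl cci login --server vra.rainpole.com -u amaury@rainpole.com \
  --insecure-skip-tls-verify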
Next, the token above is used to log in to CCI. Note that the Aria Automation URL does not take any prefix or suffix.
% kubectl cci login --server vra.rainpole.com --token=xgiANA0UKPWoeIjavIXgVrUhYui2zj9b \
  --insecure-skip-tls-verify
Logging into vra.rainpole.com
Getting supervisor namespaces
Successfully logged into vra.rainpole.com
Created kubeconfig contexts:
    cci
The login reports the available contexts. There should be one called ‘cci’. Set the context to cci.
% kubectl config use-context cci
Switched to context "cci".
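As an optional sanity check, the standard kubectl config commands will show which contexts were created and which one is currently active:

# Standard kubectl commands to list contexts and confirm the active one
% kubectl config get-contexts
% kubectl config current-context
cci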
Now that we have switched contexts, we can begin to create projects, regions, and supervisor namespace classes. All of these are needed before our user can use Aria Automation to interact with the Supervisor.
Create Projects, Regions, Namespace Classes
This part of the process shows the various YAML manifests and their configuration. Before creating those objects, it is a good idea to first check that you are able to communicate successfully with the Supervisor using the kubectl tool. First, try to list the various supervisor namespaces after logging into the CCI.
% kubectl get ns
NAME                                         STATUS   AGE
amaury-ns                                    Active   3h56m
cormac-ns                                    Active   46h
default                                      Active   47h
kube-node-lease                              Active   47h
kube-public                                  Active   47h
kube-system                                  Active   47h
svc-cci-service-domain-c144                  Active   46h
svc-dsm-domain-c144                          Active   17m
svc-tmc-c144                                 Active   47h
tanzu-cli-system                             Active   47h
vmware-system-ako                            Active   47h
vmware-system-appplatform-operator-system   Active   47h
vmware-system-capw                           Active   47h
vmware-system-cert-manager                   Active   47h
vmware-system-csi                            Active   47h
vmware-system-imageregistry                  Active   47h
vmware-system-kubeimage                      Active   47h
vmware-system-license-operator               Active   47h
vmware-system-logging                        Active   47h
vmware-system-monitoring                     Active   47h
vmware-system-netop                          Active   47h
vmware-system-nsop                           Active   47h
vmware-system-pinniped                       Active   47h
vmware-system-pkgs                           Active   47h
vmware-system-supervisor-services            Active   47h
vmware-system-tkg                            Active   47h
vmware-system-ucs                            Active   47h
vmware-system-vmop                           Active   47h
It is also useful to verify that you can list the supervisors, especially when there are many supervisor clusters available for provisioning. This is one of the principles behind the CCI: it makes it easy for a user to interact with the many Supervisor Clusters that a customer may have in their environment. With VCF, every workload domain could contain a Supervisor Cluster. In this example, there is only a single supervisor (vSphere with Tanzu) deployment.
% kubectl -n cci-config get supervisors
NAME                               STATUS   REGION   AGE
vcsa-01.rainpole.com:domain-c144   On                46h
Communication seems to be working successfully. We can now begin to create the necessary objects. Let’s begin with a Project. This task will create a Project for the user in Aria Automation. This is what a sample manifest looks like – you would probably want to change the name and description.
% cat project.yaml
apiVersion: project.cci.vmware.com/v1alpha1
kind: Project
metadata:
  name: amaury-project
spec:
  description: Amaury Project
  sharedResources: true
Now create the project:
% kubectl create -f project.yaml
project.project.cci.vmware.com/amaury-project created
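If you want to confirm the object exists before moving on, it should be possible to list Projects from the cci context. This is a sketch based on the resource kind rather than a command from the official docs; kubectl api-resources will confirm the exact resource name:

# Sketch: list Project objects in the cci context (resource name assumed
# from the kind; confirm with 'kubectl api-resources')
% kubectl get projects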
Next, set up a project role binding manifest. This gives user amaury an admin role within the project. Again, you would need to modify some of the entries, such as namespace and name, for your setup. Ensure you set the namespace to match the project name used previously.
% cat project-role-binding.yaml
apiVersion: authorization.cci.vmware.com/v1alpha1
kind: ProjectRoleBinding
metadata:
  name: cci:user:rainpole.com:amaury
  namespace: amaury-project
roleRef:
  apiGroup: authorization.cci.vmware.com
  kind: ProjectRole
  name: admin
subjects:
- kind: User
  name: amaury@rainpole.com
Create the project role binding:
% kubectl create -f project-role-binding.yaml
projectrolebinding.authorization.cci.vmware.com/cci:user:rainpole.com:amaury created
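Similarly, the role binding can be checked by listing ProjectRoleBindings in the project namespace. Again, this is an assumption on my part based on the resource kind, not a documented command:

# Sketch: list ProjectRoleBindings in the project namespace (resource name
# assumed from the kind; confirm with 'kubectl api-resources')
% kubectl -n amaury-project get projectrolebindings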
Next, we need to create a region. Here is my example of a region manifest. Again, set your own name and description.
% cat region.yaml
apiVersion: topology.cci.vmware.com/v1alpha1
kind: Region
metadata:
  name: amaury-region
spec:
  description: Amaury in Spain
Create the region:
% kubectl create -f region.yaml
region.topology.cci.vmware.com/amaury-region created
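The same kind of check works for the Region, confirming the object exists before we move on to the Supervisor updates. A sketch, assuming the resource is listable in the cci context:

# Sketch: list Region objects (resource name assumed from the kind)
% kubectl get regions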
We now need to make a few updates to the Supervisor. The reason for this is to control which Supervisors the user amaury has access to. First we need to add the region, but we are also going to add a label so that the supervisor can be identified by some additional objects that we have yet to create. To add the region, edit the supervisor using kubectl edit as shown below and add a regionNames section to the spec. You can check that it has worked by doing a simple get on the supervisor. To begin with, the REGION field is empty. We will edit the supervisor object to add it.
% kubectl -n cci-config get supervisors
NAME                               STATUS   REGION   AGE
vcsa-01.rainpole.com:domain-c144   On                46h

% kubectl -n cci-config edit supervisor vcsa-01.rainpole.com:domain-c144
supervisor.infrastructure.cci.vmware.com/vcsa-01.rainpole.com:domain-c144 edited
In the editor, change the spec section of the supervisor object from:
spec:
  cloudAccountName: vcsa-01.rainpole.com
  displayName: CJH-Cluster-1
  externalId: domain-c144
To:
spec:
  cloudAccountName: vcsa-01.rainpole.com
  displayName: CJH-Cluster-1
  externalId: domain-c144
  regionNames:
  - amaury-region
To check the actual config, use the following command to display the YAML. The regionNames section is what has just been added.
% kubectl -n cci-config get supervisors -o yaml
apiVersion: v1
items:
- apiVersion: infrastructure.cci.vmware.com/v1alpha1
  kind: Supervisor
  metadata:
    annotations:
      infrastructure.cci.vmware.com/cloud-account-id: aa52edd1-0c98-448f-bd58-581f5bf74caa
    creationTimestamp: "2024-06-22T12:17:06Z"
    labels: {}
    name: vcsa-01.rainpole.com:domain-c144
    namespace: cci-config
    uid: 6c9af89855683f033b40ec82aed826888fd6f7ea
  spec:
    cloudAccountName: vcsa-01.rainpole.com
    displayName: CJH-Cluster-1
    externalId: domain-c144
    regionNames:
    - amaury-region
  status:
    powerState: "On"
kind: List
metadata:
  resourceVersion: ""
You can see from the above output that there is currently no label associated with this supervisor. We are going to edit the supervisor once more and add a label, environment: testing, which the upcoming objects will use to identify the supervisor.
% kubectl -n cci-config edit supervisors vcsa-01.rainpole.com:domain-c144
supervisor.infrastructure.cci.vmware.com/vcsa-01.rainpole.com:domain-c144 edited
After making the changes, the YAML output for the supervisor should look similar to the following, with both the label and region added:
% kubectl -n cci-config get supervisors -o yaml
apiVersion: v1
items:
- apiVersion: infrastructure.cci.vmware.com/v1alpha1
  kind: Supervisor
  metadata:
    annotations:
      infrastructure.cci.vmware.com/cloud-account-id: aa52edd1-0c98-448f-bd58-581f5bf74caa
    creationTimestamp: "2024-06-22T12:17:06Z"
    labels:
      environment: testing
    name: vcsa-01.rainpole.com:domain-c144
    namespace: cci-config
    uid: 6c9af89855683f033b40ec82aed826888fd6f7ea
  spec:
    cloudAccountName: vcsa-01.rainpole.com
    displayName: CJH-Cluster-1
    externalId: domain-c144
    regionNames:
    - amaury-region
  status:
    powerState: "On"
kind: List
metadata:
  resourceVersion: ""
Check that the region is now visible on the supervisor:
% kubectl -n cci-config get supervisors
NAME                               STATUS   REGION          AGE
vcsa-01.rainpole.com:domain-c144   On       amaury-region   46h
The next step is to complete the region setup. Two manifests must be applied: a region binding and a region binding configuration. Note the supervisorSelector part of the config, which uses the label added to the Supervisor earlier to match it to the region. This binds our supervisor to the amaury-region region. The namespace should once again be the project that we created earlier.
% cat regionbinding.yaml
apiVersion: topology.cci.vmware.com/v1alpha1
kind: RegionBinding
metadata:
  name: amaury-region
  namespace: amaury-project

% cat regionbindingconfig.yaml
apiVersion: topology.cci.vmware.com/v1alpha1
kind: RegionBindingConfig
metadata:
  name: amaury-region
  namespace: amaury-project
spec:
  supervisorSelector:
    matchExpressions:
    - key: environment
      operator: In
      values:
      - testing
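The two region manifests are created with the same kubectl create pattern used for the earlier objects:

# Apply the region binding and region binding configuration
% kubectl create -f regionbinding.yaml
% kubectl create -f regionbindingconfig.yaml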
Once these region manifests are applied, we can turn our attention to the Supervisor Namespace class. Our objective here is to build a Namespace class so that when the user amaury creates a Namespace on the Supervisor via Aria Automation, we have control over the resources that the user has access to. This includes defining the VM Classes, Content Libraries and Storage Classes available in the Namespace. It also allows limits to be put in place around vSphere Pods. To begin the task, a Namespace class is created and associated with a project. There are three manifests to accomplish this, similar to what we saw with the region setup. The main things to highlight are once again in the config file. Again, we have the supervisorSelector matching the label added earlier. This limits the Namespace class to our Supervisor cluster and not any others; you could, however, have it apply to multiple Supervisors if they existed. I am adding a single storage class to the Namespace, but for VM classes and Content Libraries I used a wildcard ‘*’ to pick up everything available. You could of course tune this to your own requirements. When a new Namespace is created through Aria Automation later on, all of the VM Classes, Content Libraries and the chosen storage policy will be available to the user.
% cat supervisornamespaceclass.yaml
apiVersion: infrastructure.cci.vmware.com/v1alpha1
kind: SupervisorNamespaceClass
metadata:
  name: amaury-class
spec:
  description: supervisor namespace class
  parameters:
  - name: podCountLimit
    type: Integer
    minimum: 100
    maximum: 1000
    default: 500

% cat supervisornamespaceclassbinding.yaml
apiVersion: infrastructure.cci.vmware.com/v1alpha1
kind: SupervisorNamespaceClassBinding
metadata:
  name: amaury-class
  namespace: amaury-project
spec:
  overrideParameters:
  - name: podCountLimit
    type: Integer
    const: 1000

% cat supervisornamespaceclassconfig.yaml
apiVersion: infrastructure.cci.vmware.com/v1alpha1
kind: SupervisorNamespaceClassConfig
metadata:
  name: amaury-class
spec:
  storageClasses:
  - name: vsan-default-storage-policy
  vmClasses:
  - name: '*'
  contentSources:
  - name: '*'
    type: ContentLibrary
  # Below limits are an EXAMPLE! Setting them may cause unexpected behavior in your namespace
  # Either set reasonable limits, or remove the below section to get unlimited resources
  limits:
  - name: pod_count
    limit: "((parameters.podCountLimit))"
  supervisorSelector:
    matchExpressions:
    - key: environment
      operator: In
      values:
      - testing
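As with the earlier objects, use the kubectl create command to deploy these three manifests:

# Apply the namespace class, its binding, and its configuration
% kubectl create -f supervisornamespaceclass.yaml
% kubectl create -f supervisornamespaceclassbinding.yaml
% kubectl create -f supervisornamespaceclassconfig.yaml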
With all of the above configured and deployed, you should now be able to view the constructs that were created via Aria Automation. These are detailed in the official documentation once again. However, a good way to test the functionality is to verify that our user is able to create a new Supervisor Namespace from Aria Automation.
Create Supervisor Namespace from Aria Automation
After logging into Aria Automation, open the Service Broker view and navigate to Supervisor Namespaces. Now click on + New Supervisor Namespace.
The Namespace Class that we created earlier (amaury-class) should now be visible. Click on Create.
The project is automatically assigned a name. You can add an optional description if you wish. Note that the region shown is the one we created earlier and associated with our Supervisor. Click Create once again.
The new Supervisor namespace should enter an Active state after a few moments.
Click on the Namespace and you should see all of the services that are available in the namespace, such as the ability to create a new Virtual Machine using the VM Service or a new Kubernetes cluster using the TKG Service. It looks like we have successfully configured the CCI Service and our Aria Automation users can now request the vSphere with Tanzu Supervisor Cluster to provision infrastructure.
Summary
This new namespace is also visible in the Workload Management section of the vSphere Client. Everything seems to be working as expected. The CCI Service is providing the integration between the Supervisor and the Aria Automation Cloud Consumption Interface.
I’m sure at this point you are wondering when I will get to the Data Services Manager piece and show how to deploy databases via DSM. The DSM team have created a Consumption Operator which, when installed on the Supervisor as a service (similar to what we have seen with CCI), will do just that. In part 4, the final part of this series, I will show you how to do that.