
Configuring vROps 7.5 Management Pack for Container Monitoring

The vROps Management Pack for Container Monitoring is something that I had been meaning to install and configure for a while now, but I just haven’t had a chance until very recently. If you didn’t know, VMware’s vRealize Operations has a Management Pack for Container Monitoring, which includes adapters for both Pivotal Container Service (PKS) and Kubernetes. In my environment I had already deployed PKS, which I was using to deploy my Kubernetes clusters. I found the official documentation a little light on exactly what information is required for both the PKS Adapter and the Kubernetes Adapter in the Administration > Solutions > Configuration section of the VMware vRealize Operations Management Pack for Container Monitoring. After some trial and error, I decided to write it up here for future reference.

First of all, you need to make sure that you are using a supported version of vROps. I upgraded my 6.7 release to 7.5 as I was unable to see the Kubernetes Overview dashboard on version 6.7. Note that I also reinstalled the Management Pack after upgrading to 7.5. At this point, the Kubernetes Overview dashboard was available.

To configure the Container Monitoring dashboards, log in to your vROps UI, navigate to Administration > Solutions > Configuration, and select the VMware vRealize Operations Management Pack for Container Monitoring. Under Configured Adapter Instances, there should now be an unconfigured PKS Adapter and Kubernetes Adapter in the lower half of the screen, as follows:

Let’s start with the PKS adapter. Select the first ‘cogs’ icon under Configured Adapter Instances to begin configuring. This will launch the following “Manage Solution” window.

The Display Name can be arbitrary. Also, if you have multiple PKS deployments, the PKS Instance Alias can be used to distinguish between them. However, we need to log in to Pivotal Ops Manager to retrieve information such as the PKS API Hostname (unless you can remember it off the top of your head), and we also need to get some credential information (which you definitely won’t be able to recall). Log in to the Pivotal Operations Manager, click on the PKS tile, then Settings, and select the PKS API Service. This will give you the FQDN of your API server. In my case, as shown below, it is uaa.rainpole.com.
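
As a quick sanity check, you can log in against that FQDN using the PKS CLI. This is just a sketch; note that pks login takes the PKS admin password, which is a different credential to the UAA client secret that the adapter will ask for shortly:

$ pks login -a uaa.rainpole.com -u admin -p <pks-admin-password> -k

The -k flag skips SSL validation, which is useful in a lab with self-signed certificates.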

Now I need to retrieve the appropriate Credential, via the Manage Credential task shown below. This will require the PKS Username (in my case this is just admin) as well as the UAA Management Admin Client’s secret. The latter piece of information is found in the Pivotal Ops Manager PKS tile once again, but this time under the Credentials tab. Locate the PKS Uaa Management Admin Client entry, and then click on its “Link to Credential”.

Once the Credential information is displayed, copy the appropriate “secret” section.
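
If you want to verify the secret before handing it to vROps, one option is the uaac CLI, assuming you have it installed (it is not needed for the adapter itself). UAA listens on port 8443 of the PKS API host, and the admin client id below is an assumption based on my setup:

$ uaac target https://uaa.rainpole.com:8443 --skip-ssl-validation
$ uaac token client get admin -s <uaa-management-admin-client-secret>

If the secret is correct, uaac should report that it fetched a token via the client credentials grant.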

You can now use this to populate the Manage Credential in the PKS adapter:

With the rest of the information populated, you should be able to click on TEST CONNECTION:

If everything goes well, you should see a Test connection successful popup.

Finally, click on SAVE SETTINGS, and you should see an Adapter successfully saved popup.

At this point, the PKS adapter has been configured. We can now move on to configuring the Kubernetes Adapter. However, before we do that, we need to deploy a cAdvisor DaemonSet on the Kubernetes cluster, as per the install documentation. A DaemonSet simply means that a copy is installed on every Kubernetes worker node in the cluster. cAdvisor (Container Advisor) is a daemon that collects, aggregates, processes, and exports information about running containers, thus providing details about resource usage and performance characteristics.
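
Deploying it is a single kubectl command, assuming you have saved the DaemonSet manifest from the Management Pack documentation locally (the filename below is simply what I called it):

$ kubectl create -f vrops-cadvisor.yaml

In my 4 node K8s cluster, I then see 4 Pods running in the kube-system namespace: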

$ kubectl get ds vrops-cadvisor -n kube-system
NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
vrops-cadvisor   4         4         4       4            4           <none>          3d13h

$ kubectl get pods -n kube-system -o wide | egrep "NAME|cadvisor"
NAME                                    READY   STATUS    RESTARTS   AGE     IP              NODE                                   NOMINATED NODE
vrops-cadvisor-97wms                    1/1     Running   0          3d13h   192.168.192.6   0a390173-f26e-4ab1-89cf-1396c6b2f746   <none>
vrops-cadvisor-lsw6n                    1/1     Running   0          3d13h   192.168.192.3   cd286848-21a9-4a80-aa17-84fad6da2a86   <none>
vrops-cadvisor-pg54r                    1/1     Running   0          3d13h   192.168.192.5   19c3b8a5-a79b-4b17-8e86-cc479dbda206   <none>
vrops-cadvisor-x5p4k                    1/1     Running   0          3d13h   192.168.192.7   2c440e84-8471-46cb-890c-127e4f183a48   <none>
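
Since the DaemonSet exposes cAdvisor on host port 31194 (the port we will give the Kubernetes Adapter shortly), a quick way to confirm that it is actually serving data is to curl that port on one of the worker nodes, substituting a worker node IP address that is reachable from your host:

$ curl -s http://<worker-node-ip>:31194/metrics | head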

I can now go ahead and configure the Kubernetes adapter, which needs to connect to cAdvisor to get statistics from the nodes and Pods.

First of all, I need to get the Master URL. This is a reference to the master node in my K8s cluster. Use the PKS CLI to get this information.

$ pks clusters

Name            Plan Name  UUID                                  Status     Action
k8s-cluster-01  small      438efd88-494c-4cf0-a0bb-60e886549a3f  succeeded  UPDATE

$ pks cluster k8s-cluster-01

Name:                     k8s-cluster-01
Plan Name:                small
UUID:                     438efd88-494c-4cf0-a0bb-60e886549a3f
Last Action:              UPDATE
Last Action State:        succeeded
Last Action Description:  Instance update completed
Kubernetes Master Host:   pks-cluster-01
Kubernetes Master Port:   8443
Worker Nodes:             4
Kubernetes Master IP(s):  192.168.191.63
Network Profile Name:

I can see the Kubernetes Master IP. I also have this in my DNS as pks-cluster-01.rainpole.com. The full Master URL is therefore https://pks-cluster-01.rainpole.com:8443 (note https, not http). Of course, you also need to ensure that your vROps can reach this master IP.
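
It is worth confirming that the master actually responds on that URL before configuring the adapter. A 401 or 403 response here is fine, since it still proves the API server is answering; the -k flag skips certificate validation:

$ curl -k https://pks-cluster-01.rainpole.com:8443/version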

The other entries that need to be populated are the cAdvisor Service (which we deployed as a DaemonSet) and the cAdvisor Port for the DaemonSet, which is 31194. This brings us to the Credential section, which needs credentials in order to access the Kubernetes cluster. These should already be configured and available in ~/.kube/config on the host from which you manage your PKS/K8s clusters. If you have not already created this file using the pks get-credentials command, you can do so as follows.

$ pks get-credentials k8s-cluster-01

Fetching credentials for cluster k8s-cluster-01.
Context set for cluster k8s-cluster-01.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>

Now in ~/.kube/config there will be credentials that can be used to populate the Credentials field for the Kubernetes Adapter. In my environment, my K8s credentials were provided as tokens, so I selected Token Auth and copied the name and token from ~/.kube/config into the Manage Credential.
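
If you would rather not pick the token out of the file by hand, kubectl can print it for you. This is a sketch; the jsonpath assumes the user for this cluster is the first users entry in your config:

$ kubectl config view --raw -o jsonpath='{.users[0].user.token}'

The Kubernetes Adapter configuration should now look something like this: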

As before, click on TEST CONNECTION. You may receive an untrusted certificate popup – if so, just click Accept. Hopefully you will receive a Test connection successful message. If not, verify that the cAdvisor DaemonSet is running, that the URL and credentials have been entered correctly (remember https and port 8443 in the URL), and that you can reach the K8s master from your vROps deployment. Finally, save the Adapter settings and close the configuration window. In a short space of time the Collection State should change to Collecting, and the Collection Status should change to Data receiving, as shown below.

You may need to refresh the Solutions view, but soon you should see the vROps Management Pack for Container Monitoring change its Adapter Status to Data Receiving as well. Now you should be able to select Dashboards > All Dashboards > Kubernetes Overview. In a short space of time, information related to nodes, namespaces, Pods, containers and services should begin to populate. Here is what I could observe in my dashboard in under 5 minutes. This is the upper part of the dashboard, which focuses primarily on cluster overview details and events.

The lower half of the dashboard gives more detailed information, and allows an admin to look more closely at individual node and Pod metrics. Note that you have to double-click on an individual metric in widgets 9 and 14 in order for the graphs to appear in widgets 10 and 15 respectively.

Happy monitoring!
