Managing Tanzu Community Edition with vRealize Operations

Why

I've spent many hours working with VMware Tanzu Community Edition (TCE) since it was released. It has been an excellent education in both Tanzu and Kubernetes. You can find my first experiences installing TCE in this blog post and this blog post. I have moved a few of my home lab container workloads from VMs to Kubernetes managed by TCE, and I wanted to be able to manage those containers. Since I manage the rest of my VMware infrastructure with vRealize Operations, I decided to look into its Kubernetes management capabilities.

Also, the vRealize Operations 8.6.0 user interface has changed since the vRealize Operations Kubernetes Management Pack instructions were written. After spending about a week finding 100 ways to make the Kubernetes Management Pack NOT work with TCE, I found an excellent and very helpful VMware blog post on vRealize Operations and Tanzu Kubernetes Grid (TKG). This post is similar to that TKG post, with a little more detail and explanation now that the vROps UI has changed.

How

Parts List

  • A TCE Kubernetes Cluster 
  • vRealize Operations
  • The vRealize Operations Management Pack for Kubernetes

Really Fast Summary πŸ™‚

  • Install Tanzu Community Edition
  • kubectl apply -f vrops_cadvisor.yaml 
  • Install the latest vRealize Operations
  • Connect vRealize Operations to vCenter
  • Install the Kubernetes Management Pack into vRealize Operations
  • Add a Kubernetes Account to vRealize Operations
    • For your credential, choose "Client Certificate Auth"
    • Use these fields from your .kube/config file for TCE:
      • certificate-authority-data:
      • client-certificate-data:
      • client-key-data:
    • Enter the FQDN for your vCenter Server
Can you believe it took me a week to figure that out? πŸ™„

Install the Management Pack

I am running vRealize Operations 8.6.0 on-premises, so I downloaded the on-premises management pack from this page on VMware Marketplace.

Import the Management Pack into vRealize Operations by going to Data Sources > Integrations > Repository and choosing Add.




Browse to and select your downloaded Management Pack file, then select Upload.


When the upload is complete, select Next.

Accept the license terms and select Next. The installation will begin.


When complete, select Finish.


Install cAdvisor (Container Advisor) into Kubernetes

"cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide."

The vRealize Operations Kubernetes Management Pack collects container information from the cAdvisor DaemonSet. (You can alternatively use a Prometheus implementation to provide metrics.)

I have been successful with William Lam's vrops_cadvisor.yaml from the VMware blog post mentioned above.


apiVersion: apps/v1 # apps/v1beta2 in Kube 1.8, extensions/v1beta1 in Kube < 1.8
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: kube-system 
  labels:
    app: cadvisor
  annotations:
      seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
  selector:
    matchLabels:
      app: cadvisor
  template:
    metadata:
      labels:
        app: cadvisor
        version: v0.31.0
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: cadvisor
        image: google/cadvisor:v0.31.0
        resources:
          requests:
            memory: 250Mi
            cpu: 250m
          limits:
            cpu: 400m
        volumeMounts:
        - name: rootfs
          mountPath: /rootfs
          readOnly: true
        - name: var-run
          mountPath: /var/run
          readOnly: true
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: docker
          mountPath: /var/lib/docker # Mounting the Docker volume
          readOnly: true
        - name: disk
          mountPath: /dev/disk
          readOnly: true
        ports:
          - name: http
            containerPort: 8080 # Port exposed by the container
            hostPort: 31194 # Host's port - the port that exposes the cAdvisor DaemonSet on each node
            protocol: TCP
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: rootfs
        hostPath:
          path: /
      - name: var-run
        hostPath:
          path: /var/run
      - name: sys
        hostPath:
          path: /sys
      - name: docker
        hostPath:
          path: /var/lib/docker #Docker path in Host System
      - name: disk
        hostPath:
          path: /dev/disk

I saved the YAML to vrops_cadvisor.yaml and ran "kubectl apply -f vrops_cadvisor.yaml". I use Octant to manage my Kubernetes clusters, so I could see that a new DaemonSet named "cadvisor" was up and running.
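If you are not running Octant, a couple of kubectl commands will confirm the same thing (assuming the DaemonSet landed in kube-system, as in the YAML above):

kubectl get daemonset cadvisor -n kube-system
kubectl get pods -n kube-system -l app=cadvisor -o wide

You should see one cAdvisor pod per node, along with the node each pod is running on.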




Clicking on cadvisor in Octant, I can see two pods running and exposing port 31194.





We can now check out cAdvisor and all its beautiful graphs and metrics at http://[node-ip]:31194/cluster (the hostPort is bound on each node, so any node's IP works). Cool.
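If you prefer the command line, a quick curl confirms cAdvisor is answering on the hostPort (NODE_IP below is a placeholder for one of your node IPs):

curl http://NODE_IP:31194/healthz
curl -s http://NODE_IP:31194/metrics | head

The first should return "ok", and the second dumps the start of cAdvisor's Prometheus-format metrics.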



Define Your vROps Kubernetes Account

Go to Data Sources > Integrations > Add Account




Choose Kubernetes.

Give your account a name and an optional description.

From your ~/.kube/config, copy the "server:" value into the "Master URL" field.
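If you would rather not dig through the file by hand, this one-liner prints just the server URL (assuming your TCE cluster is the current kubectl context):

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'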

Select "DaemonSet" for "Collector Service"

Enter "31194" for "cAdvisor Port (DaemonSet)


Choose "vRealize Operations Manager Collector-vRealize Cluster Node" in the "Collector/Group" drop-down


Click the plus sign next to Credential to create a new credential for this account.

Choose Client Certificate Auth as the Credential Kind

Name your credential

The rest of the fields come from your ~/.kube/config file for your TCE cluster:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2akNDQWRLZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkF[SNIP]LS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.58:6443
  name: ph-small
kind: Config
preferences: {}
users:
- name: ph-small-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJQlRFN2tvaXYxdU13RFFZSkt[SNIP]gQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBOVdQTXZKeFdYUUsyQk44WHl[SNIP]4YUllb3FDbDR6WThzQkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
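Rather than copying the long base64 strings by hand, you can have kubectl print each one. The --raw flag matters here; without it, the certificate data is redacted. (--minify assumes your TCE cluster is the current context, which is also why the [0] indexes are safe: a minified config contains exactly one cluster and one user.)

kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
kubectl config view --raw --minify -o jsonpath='{.users[0].user.client-certificate-data}'
kubectl config view --raw --minify -o jsonpath='{.users[0].user.client-key-data}'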

Copy the "certificate-authority-data:" information into the "Certificate Authority Data" field

Copy the "client-certificate-data:" information into the "Client Certificate Data" field

Copy the "client-key-data:" information into the "Client Certificate Data" field

Click OK.

In the Advanced Settings section, enter the IP or FQDN of your vCenter Server so that vRealize Operations can map your Kubernetes nodes to VMs.



Click "TEST CONNECTION" 

Upon successful test, click "ADD". Your new account Status should be listed as OK



Look at Your Kubernetes Dashboards

Go to Visualize > Dashboards > Kubernetes Environment > Kubernetes Overview.

Choose the Kubernetes account name you just created to see information for that cluster.




This is the fun part. You can drill into your cluster for relationships and any issues, and even see the vSphere VM relationships.

Here is my Yelb namespace and its health in my TCE cluster



Here is the vSphere relationship and health for the TCE Control Plane VM


Here is information on TCE node health; I can see that one node is tight on memory, as the Memory badge is yellow.


Add Every K8S Cluster Type in the Known Universe!

I was so excited that TCE was up and monitored that I decided to try adding more types of Kubernetes clusters to vRealize Operations. Here is my record of success:

  • Tanzu Community Edition: Yes
  • Tanzu Kubernetes Grid: Yes
  • MicroK8s: Yes, using a Bearer Token instead of Client Certificate Auth, and port 16443 instead of 6443
  • K3s: No - "Invalid Integer" on client certificate validation for some reason
To find the Bearer Token for your MicroK8s (or any) cluster, run this command:

TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode)
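This stores the decoded token in the TOKEN shell variable; run echo $TOKEN to print it so you can paste it into the vROps credential. (One note: on Kubernetes 1.24 and later, service accounts no longer get long-lived token secrets created automatically, so on a newer cluster you would mint a token instead with kubectl create token default.)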

Thank You

Thank you for taking the time to read this post. This was quite the learning experience for me, and I hope you find this post helpful and that it saves you some time and pain. I welcome your feedback.
