Hands-On with VMware Tanzu Community Edition - vSphere Option


Why

VMware Tanzu Community Edition (TCE) was just released the other day. Tanzu is one of the best heterogeneous, multi-cloud Kubernetes platforms available today. Now that there is a free version, I wanted to install TCE in my lab and get some hands-on experience. For this hands-on, I will install TCE on vSphere as well as AWS (next post) and share my experience.

How

I started with William Lam's excellent post, which is really all you need. Well, that and a vSphere or cloud environment to host TCE. My laptop is a Mac, so I will use the macOS-specific instructions. That leads us to this page of instructions on GitHub to get the Tanzu CLI installed on my Mac.

Install Tanzu Prerequisites

Instructions for installing Docker and kubectl are on this TCE community page.

Docker: Download and install the .dmg file

kubectl: brew install kubectl

Install the Tanzu CLI on macOS

Three Commands:

1) brew tap vmware-tanzu/tanzu
2) brew install tanzu-community-edition
3) ${HOMEBREW_EXEC_DIR}/configure-tce.sh

$ brew tap vmware-tanzu/tanzu  
==> Tapping vmware-tanzu/tanzu
Cloning into '/usr/local/Homebrew/Library/Taps/vmware-tanzu/homebrew-tanzu'...
remote: Enumerating objects: 82, done.
remote: Counting objects: 100% (82/82), done.
remote: Compressing objects: 100% (66/66), done.
remote: Total 82 (delta 35), reused 37 (delta 12), pack-reused 0
Unpacking objects: 100% (82/82), done.
Tapped 1 formula (105 files, 96KB).

$ brew install tanzu-community-edition
==> Downloading https://github.com/vmware-tanzu/community-edition/releases/download/v0.9.1/tce-darwin-amd64-v0.9.1.tar.gz
Already downloaded: /Users/faucherd/Library/Caches/Homebrew/downloads/57aa013fd7f19014e9345d7c2e77fcd37ec0dfcf82221f04bfb406d1079a71d2--tce-darwin-amd64-v0.9.1.tar.gz
==> Installing tanzu-community-edition from vmware-tanzu/tanzu
==> Thanks for installing Tanzu Community Edition!
==> The Tanzu CLI has been installed on your system
==> 


==> ******************************************************************************
==> * To initialize all plugins required by Tanzu Community Edition, an additional
==> * step is required. To complete the installation, please run the following
==> * shell script:
==> *
==> * /usr/local/Cellar/tanzu-community-edition/v0.9.1/libexec/configure-tce.sh
==> *
==> ******************************************************************************
==> 



==> * To cleanup and remove Tanzu Community Edition from your system, run the
==> * following script:
==> /usr/local/Cellar/tanzu-community-edition/v0.9.1/libexec/uninstall.sh
==> 


🍺  /usr/local/Cellar/tanzu-community-edition/v0.9.1: 15 files, 642.6MB, built in 12 seconds

$ /usr/local/Cellar/tanzu-community-edition/v0.9.1/libexec/configure-tce.sh
MY_DIR: /usr/local/Cellar/tanzu-community-edition/v0.9.1/libexec
/Users/faucherd/Library/Application Support
Removing old plugin cache from /Users/faucherd/.cache/tanzu/catalog.yaml
Making a backup of your Kubernetes config files into /tmp
| initializing ✔  successfully initialized CLI 
Installation complete!

Deploy a Kubernetes Cluster on vSphere from macOS with TCE

This TCE documentation page is the best guide from here on out.

Create a Tanzu Kubernetes VM Template in vSphere

Download a Photon or Ubuntu OVA as the base for your template from here. I tried both and was more successful with Ubuntu. YMMV.

Once the OVA is deployed to vSphere, convert the VM to a vSphere template. You can follow the instructions on that page ^^ if you need any help with the template steps.

Start the Tanzu Community Edition Installer

(NB: Install and start Docker on your Mac before running the next command. I did not do that and felt like an idiot.)
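A quick pre-flight check can save that embarrassment. Here is a minimal sketch; the helper names and messages are my own, not part of TCE:

```shell
# require_cmd: succeed if a command exists on PATH; otherwise print a hint and fail.
require_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    return 0
  fi
  echo "missing: $1 -- install it first" >&2
  return 1
}

# Docker must be installed AND running; 'docker info' exits non-zero when the daemon is down.
preflight() {
  require_cmd docker && require_cmd kubectl && require_cmd tanzu || return 1
  docker info >/dev/null 2>&1 || { echo "Docker daemon is not running -- start Docker Desktop" >&2; return 1; }
}
```

Run preflight before the create command below; if it prints nothing, you are good to go.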

$ tanzu management-cluster create --ui

Validating the pre-requisites...
Serving kickstart UI at http://127.0.0.1:8080

If everything is working, the Welcome to the Tanzu Community Edition Installer page will automagically load in a new browser tab on your Mac.


Choose the VMware vSphere Deploy button

Step 1: IaaS Provider

Fill in your vCenter credentials and click Connect


Verify the vCenter SSL thumbprint and click Continue



Paste in the RSA public key that you will use to ssh to the Kubernetes components running in vSphere. If you do not have an RSA private/public key pair already, just run these commands:

$ ssh-keygen -t rsa               
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/faucherd/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/faucherd/.ssh/id_rsa.
Your public key has been saved in /Users/faucherd/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:ChangeMeO1NcPBKWyEVTrgzj39LO5dRx9YGOTQIOKhE faucherd@dennis-mbp
The key's randomart image is:
+---[RSA 3072]----+
|   E.. =*+.      |
|    . Boo=.   .  |
|   . .= o.=. o ..|
|    .. O = .*   +|
|      o S .. o. o|
|       Scary . o |
|      B = o o .  |
|     . * + +     |
|      ... o .    |
+----[SHA256]-----+

$ pbcopy < ~/.ssh/id_rsa.pub

Now your public key is in your clipboard and you can paste it into the Installer "SSH Public Key" field (I hid some of mine).

Choose a Datacenter and click Next



Step 2: Standalone Cluster Settings

Choose your instance type. I chose the small development type for testing.


Choose a name for your cluster, a Load Balancer type (I don't have NSX installed), and an available static IP address for the load balancer.


Click Next

Step 3: VMware NSX Advanced Load Balancer


I do not have NSX installed, so I am skipping this step by clicking Next

Step 4: Metadata




In the interest of brevity, I am skipping this step as well by clicking Next

Step 5: Resources



Choose the VM folder for your Kubernetes nodes from the drop-down, choose the datastore for your Kubernetes nodes from the drop-down, choose which hosts or clusters to deploy the nodes to, and click Next.

Step 6: Kubernetes Network



Choose your VM network from the drop-down, leave the default internal-only CIDRs or choose your own, define your proxy if one is needed to reach the external network, and click Next.

Step 7: Identity Management



For a development environment, you can disable identity management and click Next

Step 8: OS Image


Choose the Kubernetes VM template you created from the drop-down and click Next, then click Review Configuration.

Review Configuration


The Review Configuration page will display all the data you entered and also display the tanzu command that will be run and the custom YAML path that will be used as input.

Click Deploy Standalone Cluster

Deploy





Now, things will start happening. You can track the progress in the Installer window and watch vSphere tasks get kicked off in the vCenter UI.




Success!



$ tanzu management-cluster create --ui

Validating the pre-requisites...
Serving kickstart UI at http://127.0.0.1:8080
Identity Provider not configured. Some authentication features won't work.
Validating configuration...
web socket connection established
sending pending 2 logs to UI
Using infrastructure provider vsphere:v0.7.10
Generating cluster configuration...
Setting up bootstrapper...
Bootstrapper created. Kubeconfig: /home/dennis/.kube-tkg/tmp/config_FpQ4TOgL
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager Version="v1.1.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.23" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.10" TargetNamespace="capv-system"
Start creating management cluster...
Saving management cluster kubeconfig into /home/dennis/.kube/config
Installing providers on management cluster...
Fetching providers
Installing cert-manager Version="v1.1.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.23" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.10" TargetNamespace="capv-system"
Waiting for the management cluster to get ready for move...
Waiting for addons installation...
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Waiting for additional components to be up and running...
Waiting for packages to be up and running...
Context set for management cluster blog-tce as 'blog-tce-admin@blog-tce'.

Management cluster created!


Validate the Kubernetes Cluster

I waited for the CPU and disk activity to quiet down on the two new Ubuntu Kubernetes VMs and then ran this command:

$ tanzu management-cluster get
  NAME      NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES       
  blog-tce  tkg-system  running  1/1           1/1      v1.21.2+vmware.1  management  


Details:

NAME                                                         READY  SEVERITY  REASON  SINCE  MESSAGE
/blog-tce                                                    True                     9m7s          
├─ClusterInfrastructure - VSphereCluster/blog-tce            True                     9m13s         
├─ControlPlane - KubeadmControlPlane/blog-tce-control-plane  True                     9m7s          
│ └─Machine/blog-tce-control-plane-v7xb7                     True                     9m11s         
└─Workers                                                                                           
  └─MachineDeployment/blog-tce-md-0                                                                 
    └─Machine/blog-tce-md-0-77f6f6c86c-876hz                 True                     9m12s         


Providers:

  NAMESPACE                          NAME                    TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE  
  capi-kubeadm-bootstrap-system      bootstrap-kubeadm       BootstrapProvider       kubeadm       v0.3.23                  
  capi-kubeadm-control-plane-system  control-plane-kubeadm   ControlPlaneProvider    kubeadm       v0.3.23                  
  capi-system                        cluster-api             CoreProvider            cluster-api   v0.3.23                  
  capv-system                        infrastructure-vsphere  InfrastructureProvider  vsphere       v0.7.10  

Create a Kubernetes Workload Cluster for Applications

Now that your Kubernetes management cluster is up and running, let's install a workload cluster for applications.

Set your kubectl context to the management cluster

$ kubectl config use-context blog-tce-admin@blog-tce
Switched to context "blog-tce-admin@blog-tce".

Validate Access to Kubernetes

$ kubectl get nodes
NAME                                              STATUS   ROLES                  AGE   VERSION
blog-tce-control-plane-v7xb7.fios-router.home     Ready    control-plane,master   33m   v1.21.2+vmware.1
blog-tce-md-0-77f6f6c86c-876hz.fios-router.home   Ready    <none>                 24m   v1.21.2+vmware.1

Find Management Cluster YAML Name and Make a Copy

$ ls ~/.config/tanzu/tkg/clusterconfigs
4c7ew9k3up.yaml

$ cp  ~/.config/tanzu/tkg/clusterconfigs/4c7ew9k3up.yaml ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml

Edit workload1.yaml

Update CLUSTER_NAME to a name of your choosing and VSPHERE_CONTROL_PLANE_ENDPOINT to an open IP address.

CLUSTER_NAME: blog-workload-cluster
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.1.58
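If you prefer scripting the edit, a sed sketch like this works. It is shown against a stand-in file for illustration; in this walkthrough the real path is ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml, and the replacement values are the examples above:

```shell
# Stand-in copy of the generated config, just for illustration
cat > workload1.yaml <<'EOF'
CLUSTER_NAME: some-old-name
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.1.57
EOF

# Rewrite the two settings; the new values are the examples used in this post
sed \
  -e 's/^CLUSTER_NAME:.*/CLUSTER_NAME: blog-workload-cluster/' \
  -e 's/^VSPHERE_CONTROL_PLANE_ENDPOINT:.*/VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.1.58/' \
  workload1.yaml > workload1.yaml.tmp && mv workload1.yaml.tmp workload1.yaml
```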

Create the Workload Cluster

tanzu cluster create blog-workload-cluster --file ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml




$ tanzu cluster create blog-workload-cluster --file ~/.config/tanzu/tkg/clusterconfigs/workload1.yaml
Validating configuration...
Warning: Pinniped configuration not found. Skipping pinniped configuration in workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually
Creating workload cluster 'blog-workload-cluster'...
Waiting for cluster to be initialized...
Waiting for cluster nodes to be available...
Waiting for addons installation...
Waiting for packages to be up and running...

Workload cluster 'blog-workload-cluster' created

Validate the Workload Cluster and Set Context

$ tanzu cluster list
  NAME                   NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES   PLAN  
  blog-workload-cluster  default    running  1/1           1/1      v1.21.2+vmware.1  <none>  dev 

$ kubectl config use-context blog-workload-cluster-admin@blog-workload-cluster
Switched to context "blog-workload-cluster-admin@blog-workload-cluster".

Install an Application

$ tanzu package repository add tce-repo --url projects.registry.vmware.com/tce/main:0.9.1 --namespace tanzu-package-repo-global
/ Adding package repository 'tce-repo'... 
 Added package repository 'tce-repo'

$ tanzu package repository list --namespace tanzu-package-repo-global
/ Retrieving repositories... 
  NAME      REPOSITORY                                   STATUS       DETAILS  
  tce-repo  projects.registry.vmware.com/tce/main:0.9.1  Reconciling   

(Wait a bit until the status changes to "Reconcile succeeded")
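Rather than re-running the list command by hand, a small polling helper works too. The function is my own sketch; the tanzu command in the example is the one from this section:

```shell
# wait_for: poll a command every 5 seconds until its output matches a pattern
wait_for() {
  local pattern=$1; shift
  until "$@" 2>/dev/null | grep -q "$pattern"; do
    sleep 5
  done
}

# Example:
# wait_for 'Reconcile succeeded' tanzu package repository list --namespace tanzu-package-repo-global
```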

$ tanzu package repository list --namespace tanzu-package-repo-global
/ Retrieving repositories... 
  NAME      REPOSITORY                                   STATUS               DETAILS  
  tce-repo  projects.registry.vmware.com/tce/main:0.9.1  Reconcile succeeded        


$ tanzu package available list (I chopped off the Description column)
- Retrieving available packages... 
  NAME                                           DISPLAY-NAME        
  cert-manager.community.tanzu.vmware.com        cert-manager        
  contour.community.tanzu.vmware.com             Contour             
  external-dns.community.tanzu.vmware.com        external-dns        
  fluent-bit.community.tanzu.vmware.com          fluent-bit          
  gatekeeper.community.tanzu.vmware.com          gatekeeper          
  grafana.community.tanzu.vmware.com             grafana             
  harbor.community.tanzu.vmware.com              Harbor              
  knative-serving.community.tanzu.vmware.com     knative-serving     
  local-path-storage.community.tanzu.vmware.com  local-path-storage  
  multus-cni.community.tanzu.vmware.com          multus-cni          
  prometheus.community.tanzu.vmware.com          prometheus                            
  velero.community.tanzu.vmware.com              velero     

$ tanzu package available list cert-manager.community.tanzu.vmware.com
/ Retrieving package versions for cert-manager.community.tanzu.vmware.com... 
  NAME                                     VERSION  RELEASED-AT           
  cert-manager.community.tanzu.vmware.com  1.3.3    2021-08-06T12:31:21Z  
  cert-manager.community.tanzu.vmware.com  1.4.4    2021-08-23T16:47:51Z  
  cert-manager.community.tanzu.vmware.com  1.5.3    2021-08-23T17:22:51Z  

$ tanzu package install cert-manager \
>   --package-name cert-manager.community.tanzu.vmware.com \
>   --version 1.5.3
/ Installing package 'cert-manager.community.tanzu.vmware.com' 
| Getting namespace 'default' 
| Getting package metadata for 'cert-manager.community.tanzu.vmware.com' 
| Creating service account 'cert-manager-default-sa' 
| Creating cluster admin role 'cert-manager-default-cluster-role' 
| Creating cluster role binding 'cert-manager-default-cluster-rolebinding' 
- Creating package resource 
/ Package install status: Reconciling 

 Added installed package 'cert-manager' in namespace 'default'

$ tanzu package installed list
/ Retrieving installed packages... 
  NAME          PACKAGE-NAME                             PACKAGE-VERSION  STATUS               
  cert-manager  cert-manager.community.tanzu.vmware.com  1.5.3            Reconcile succeeded 

Use The Octant Kubernetes Dashboard

Install & Run Octant

$ brew install octant  
==> Downloading https://ghcr.io/v2/homebrew/core/octant/manifests/0.24.0
Already downloaded: /Users/faucherd/Library/Caches/Homebrew/downloads/c88cda557037293b24671a57038df2eb97559806d4f4d977f2bd4a453173c155--octant-0.24.0.bottle_manifest.json
==> Downloading https://ghcr.io/v2/homebrew/core/octant/blobs/sha256:8e29c1b51ec3b2d1c9b5cdc0de357cd41851f4f0df42c07afe270b6078595cf4
Already downloaded: /Users/faucherd/Library/Caches/Homebrew/downloads/98db471c13b7d994f4ad3c23b59cf2b50142dc7a43893a77bf9018ee704032e2--octant--0.24.0.catalina.bottle.tar.gz
==> Pouring octant--0.24.0.catalina.bottle.tar.gz
🍺  /usr/local/Cellar/octant/0.24.0: 3 files, 157.2MB      


  
Run octant from a terminal; by default it opens the Octant dashboard in a new browser tab.

$ octant

Poke around your Kubernetes cluster in Octant.

Issues

I noticed that my cluster did not get an external IP (Ingress) assigned. This is most likely because I do not have NSX installed in my vSphere cluster and do not have a pool of external addresses for the load balancer. I have a feeling that in my next post using AWS, an external IP address will be assigned to the load balancer.
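If you want to confirm which services are stuck, you can filter the default kubectl service listing. This awk helper is my own sketch, and it assumes kubectl's default column order (NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP ...):

```shell
# pending_lbs: read `kubectl get svc -A --no-headers` on stdin and print
# namespace/name for LoadBalancer services whose EXTERNAL-IP is still unset.
pending_lbs() {
  awk '$3 == "LoadBalancer" && ($5 == "<pending>" || $5 == "<none>") { print $1 "/" $2 }'
}

# usage: kubectl get svc -A --no-headers | pending_lbs
```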



Thank You

Thank you for taking the time to read this Tanzu Community Edition walk-through. I hope you have found the post educational. I welcome your comments and feedback.
