A closer look at the v1alpha2 TanzuKubernetesCluster format in vSphere with Tanzu
Today I wanted to take a closer look at the new Tanzu Kubernetes Cluster YAML format (v1alpha2) which extends the configurability of TKG clusters that are deployed via the TKG Service (TKGS) in vSphere with Tanzu. We will look at this from two viewpoints. The first is to show you the differences when it comes to creating a new TKG cluster, as there are a number of different manifest settings now required with the v1alpha2 format. The second viewpoint is to look at how to upgrade the Tanzu Kubernetes Release (tkr) on an existing cluster which has been upgraded from the v1alpha1 to the v1alpha2 format. The procedure is a bit different to how we upgraded v1alpha1 clusters in the past. This format upgrade to v1alpha2 happens automatically when the Supervisor Cluster is upgraded to a version that supports the v1alpha2 format (e.g. v1.21.0), but the actual release or version of Kubernetes running in the cluster still has to be manually updated.
Creating a new v1alpha2 format TKG cluster
The easiest way to demonstrate the changes is to show a manifest in the v1alpha2 format and compare it to the older v1alpha1 format. I have put simple examples of both manifests below:
v1alpha2 format:

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01
  namespace: devops1
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: guaranteed-small
      storageClass: vsan-default-storage-policy
      tkr:
        reference:
          name: v1.20.7---vmware.1-tkg.1.7fb9067
    nodePools:
    - name: worker-pool-1
      replicas: 2
      vmClass: guaranteed-small
      storageClass: vsan-default-storage-policy
      tkr:
        reference:
          name: v1.20.7---vmware.1-tkg.1.7fb9067

v1alpha1 format:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-02
spec:
  topology:
    controlPlane:
      count: 1
      class: guaranteed-small
      storageClass: vsan-default-storage-policy
    workers:
      count: 2
      class: guaranteed-small
      storageClass: vsan-default-storage-policy
  distribution:
    version: v1.20.7
Some of the major differences to highlight are:
- apiVersion: v1alpha2 replaces v1alpha1
- ability to specify a namespace in the metadata
- spec.topology.controlPlane.replicas replaces spec.topology.controlPlane.count
- vmClass replaces class for virtual machine class type
- A new spec.topology.controlPlane.tkr entry, which specifies the release/distribution
- spec.topology.nodePools replaces spec.topology.workers
- The deprecation of distribution.version
Note that in the current release, the tkr.reference.name fields must match in both the controlPlane and in the nodePools sections. In the future, different Tanzu Kubernetes releases for node pools may be supported, so some of this new format is future-proofing.
Before you can deploy a new TanzuKubernetesCluster (TKC) via the TKG Service, the Virtual Machine Class and Storage Class specified in the cluster manifest must be added to the namespace where the cluster is being deployed. Here is a view of the “devops1” namespace in my environment, where both the VM Class and StorageClass have already been added/bound.
The command line can also be used to validate that the parameters that we wish to use to create the cluster are available. All available virtual machine classes, storage classes and the virtual machine classes bound to the namespace can be displayed, as follows:
% kubectl get virtualmachineclass
NAME                  CPU   MEMORY   AGE
best-effort-2xlarge   8     64Gi     81d
best-effort-4xlarge   16    128Gi    81d
best-effort-8xlarge   32    128Gi    81d
best-effort-large     4     16Gi     81d
best-effort-medium    2     8Gi      231d
best-effort-small     2     4Gi      231d
best-effort-xlarge    4     32Gi     81d
best-effort-xsmall    2     2Gi      81d
guaranteed-2xlarge    8     64Gi     81d
guaranteed-4xlarge    16    128Gi    81d
guaranteed-8xlarge    32    128Gi    81d
guaranteed-large      4     16Gi     231d
guaranteed-medium     2     8Gi      231d
guaranteed-small      2     4Gi      81d
guaranteed-xlarge     4     32Gi     81d
guaranteed-xsmall     2     2Gi      81d

% kubectl get storageclass
NAME                          PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
vsan-default-storage-policy   csi.vsphere.vmware.com   Delete          Immediate           true                   253d

% kubectl get vmclassbinding -n devops1
NAME               VIRTUALMACHINECLASS   AGE
guaranteed-small   guaranteed-small      2m17s
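Another optional sanity check, not shown in the output above, is to describe the namespace itself. When a storage policy has been assigned to a namespace in vSphere with Tanzu, it shows up as a resource quota entry against that namespace:

% kubectl describe ns devops1

The exact quota names in the output will vary per environment, but an entry referencing vsan-default-storage-policy should be visible under the Resource Quotas section if the policy has been assigned correctly.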
The available Tanzu Kubernetes Releases can also be queried using the tanzukubernetesrelease resource, or tkr for short. As we can see, the latest release is v1.20.7, which is the version we placed in the manifest above.
% kubectl get tkr
NAME                                VERSION                          READY   COMPATIBLE   CREATED   UPDATES AVAILABLE
v1.16.12---vmware.1-tkg.1.da7afe7   1.16.12+vmware.1-tkg.1.da7afe7   True    True         22h       [1.17.17+vmware.1-tkg.1.d44d45a 1.16.14+vmware.1-tkg.1.ada4837]
v1.16.14---vmware.1-tkg.1.ada4837   1.16.14+vmware.1-tkg.1.ada4837   True    True         22h       [1.17.17+vmware.1-tkg.1.d44d45a]
v1.16.8---vmware.1-tkg.3.60d2ffd    1.16.8+vmware.1-tkg.3.60d2ffd    False   False        22h       [1.17.17+vmware.1-tkg.1.d44d45a 1.16.14+vmware.1-tkg.1.ada4837]
v1.17.11---vmware.1-tkg.1.15f1e18   1.17.11+vmware.1-tkg.1.15f1e18   True    True         22h       [1.18.15+vmware.1-tkg.2.ebf6117 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.11---vmware.1-tkg.2.ad3d374   1.17.11+vmware.1-tkg.2.ad3d374   True    True         22h       [1.18.15+vmware.1-tkg.2.ebf6117 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.13---vmware.1-tkg.2.2c133ed   1.17.13+vmware.1-tkg.2.2c133ed   True    True         22h       [1.18.15+vmware.1-tkg.2.ebf6117 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.17---vmware.1-tkg.1.d44d45a   1.17.17+vmware.1-tkg.1.d44d45a   True    True         22h       [1.18.15+vmware.1-tkg.2.ebf6117]
v1.17.7---vmware.1-tkg.1.154236c    1.17.7+vmware.1-tkg.1.154236c    True    True         22h       [1.18.15+vmware.1-tkg.2.ebf6117 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.8---vmware.1-tkg.1.5417466    1.17.8+vmware.1-tkg.1.5417466    True    True         22h       [1.18.15+vmware.1-tkg.2.ebf6117 1.17.17+vmware.1-tkg.1.d44d45a]
v1.18.10---vmware.1-tkg.1.3a6cd48   1.18.10+vmware.1-tkg.1.3a6cd48   True    True         22h       [1.19.7+vmware.1-tkg.2.f52f85a 1.18.15+vmware.1-tkg.2.ebf6117]
v1.18.15---vmware.1-tkg.1.600e412   1.18.15+vmware.1-tkg.1.600e412   True    True         22h       [1.19.7+vmware.1-tkg.2.f52f85a 1.18.15+vmware.1-tkg.2.ebf6117]
v1.18.15---vmware.1-tkg.2.ebf6117   1.18.15+vmware.1-tkg.2.ebf6117   True    True         22h       [1.19.7+vmware.1-tkg.2.f52f85a]
v1.18.5---vmware.1-tkg.1.c40d30d    1.18.5+vmware.1-tkg.1.c40d30d    True    True         22h       [1.19.7+vmware.1-tkg.2.f52f85a 1.18.15+vmware.1-tkg.2.ebf6117]
v1.19.7---vmware.1-tkg.1.fc82c41    1.19.7+vmware.1-tkg.1.fc82c41    True    True         22h       [1.20.7+vmware.1-tkg.1.7fb9067 1.19.7+vmware.1-tkg.2.f52f85a]
v1.19.7---vmware.1-tkg.2.f52f85a    1.19.7+vmware.1-tkg.2.f52f85a    True    True         22h       [1.20.7+vmware.1-tkg.1.7fb9067]
v1.20.2---vmware.1-tkg.1.1d4f79a    1.20.2+vmware.1-tkg.1.1d4f79a    True    True         22h       [1.20.7+vmware.1-tkg.1.7fb9067]
v1.20.2---vmware.1-tkg.2.3e10706    1.20.2+vmware.1-tkg.2.3e10706    True    True         22h       [1.20.7+vmware.1-tkg.1.7fb9067]
v1.20.7---vmware.1-tkg.1.7fb9067    1.20.7+vmware.1-tkg.1.7fb9067    True    True         22h
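If you want to examine a specific release in more detail before referencing it in a manifest, for example to check its conditions for readiness and compatibility, the individual tkr object can also be retrieved directly. A simple example using the release name from my environment:

% kubectl get tkr v1.20.7---vmware.1-tkg.1.7fb9067 -o yaml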
To create a new cluster, simply run the kubectl apply -f command that we used before, and if all goes well, the cluster should get created.
% kubectl apply -f tanzucluster-v1alpha2-v1.20.7.yaml
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-03 created

% kubectl get tanzukubernetesclusters -n devops1
NAME             CONTROL PLANE   WORKER   TKR NAME                           AGE   READY   TKR COMPATIBLE   UPDATES AVAILABLE
tkg-cluster-03   1               2        v1.20.7---vmware.1-tkg.1.7fb9067   37m   True    True
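At this point, to actually use the new cluster, you would typically log in to it with the kubectl-vsphere plugin and switch to its context. Here is a quick sketch; the Supervisor Cluster address and SSO username are placeholders, so substitute whatever is appropriate for your own environment:

% kubectl vsphere login --server=<supervisor-cluster-ip> \
  --vsphere-username administrator@vsphere.local \
  --tanzu-kubernetes-cluster-name tkg-cluster-03 \
  --tanzu-kubernetes-cluster-namespace devops1 \
  --insecure-skip-tls-verify

% kubectl config use-context tkg-cluster-03
% kubectl get nodes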
OK – so that demonstrates how to create a new cluster. What about upgrading an existing cluster?
Upgrading a v1alpha2 format TKG cluster
After upgrading the Supervisor Cluster in a vSphere with Tanzu environment, the TKG clusters are automatically upgraded to the v1alpha2 format. In my environment, I upgraded vSphere to version 7.0U3c and the Supervisor Cluster to version 1.21.0. However, this process does not upgrade the release version (tkr). This is still a manual step, but the fields that need to be updated are now different when compared to previous upgrades of a v1alpha1 format cluster. In a v1alpha1 cluster, you would change the following fields from:
spec:
  distribution:
    fullVersion: v1.20.2+vmware.1-tkg.2.3e10706
    version: v1.20.2
to something like:
spec:
  distribution:
    fullVersion: null
    version: v1.20.7
And this would automatically trigger a rolling update of the TKG cluster to the new version. Note that this is the procedure to follow if the Supervisor Cluster has not yet been upgraded to v1.21.0, and the TKG clusters are therefore still in the v1alpha1 format rather than the v1alpha2 format.
With the new v1alpha2 format, you need to change the tkr.reference.name field in both the controlPlane and the nodePools sections. So let’s say I wanted to upgrade my TKG cluster to v1.20.7. After running the kubectl edit tanzukubernetescluster command, I would have to change the following fields (some other fields are truncated):
...
  topology:
    controlPlane:
      replicas: 1
      storageClass: vsan-default-storage-policy
      tkr:
        reference:
          name: v1.20.7---vmware.1-tkg.1.7fb9067   <<< change here
      vmClass: guaranteed-small
    nodePools:
    - name: workers
      replicas: 3
      storageClass: vsan-default-storage-policy
      tkr:
        reference:
          name: v1.20.7---vmware.1-tkg.1.7fb9067   <<< and here
      vmClass: best-effort-small
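Once the edit has been saved, the rolling update can be watched from the Supervisor Cluster context. A few commands I tend to use to keep an eye on progress are shown below; depending on your version and permissions, the machine objects may or may not be visible in the namespace, but the virtualmachines certainly should be, and you will see the old nodes being replaced by new ones:

% kubectl get tanzukubernetesclusters -n devops1 -w
% kubectl get machines -n devops1
% kubectl get virtualmachines -n devops1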
This would then trigger the rolling upgrade as before. There are some nuances around this, as you can remove tkr references from some areas, and also fall back to using the deprecated distribution.version option if you wish. There is a great write-up in the official docs around how to do this if you are interested.
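For completeness, here is a rough sketch of what that fallback might look like, i.e. a v1alpha2 spec that relies on the deprecated distribution.version field rather than per-section tkr references. Treat this as illustrative only, and refer to the official docs for the exact combinations that are supported:

spec:
  distribution:
    version: v1.20.7
  topology:
    controlPlane:
      replicas: 1
      vmClass: guaranteed-small
      storageClass: vsan-default-storage-policy
    nodePools:
    - name: worker-pool-1
      replicas: 2
      vmClass: guaranteed-small
      storageClass: vsan-default-storage-policy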