Announcing Tanzu Community Edition from VMware
As we head into VMworld 2021 this week, there will be many announcements about new and updated VMware products and features. However, there is one that I want to bring to your attention. It is something that I have been directly involved in, in some small way, and that something is Tanzu Community Edition. Tanzu Community Edition (sometimes referred to as TCE) is a free, open source Tanzu Kubernetes Grid (TKG) distribution which has all of the same open source software found in our commercial editions of Tanzu. Personally, I find this to be a really cool announcement for a number of reasons, which I will briefly outline below.
Support for Docker
Probably the best reason for checking out Tanzu Community Edition is that it supports many different deployment options. For example, Tanzu Community Edition can be used to deploy a Tanzu Kubernetes cluster to your laptop/desktop – you do not need access to a vSphere environment, nor do you need access to hyperscalers such as AWS or Azure to try it out. So long as you have Docker on your desktop, you can stand up TCE. Not only that, but TCE provides two deployment options on Docker. You can deploy a standalone Kubernetes cluster, or you can stand up the more common Tanzu Kubernetes configuration of a management cluster plus X number of workload clusters, depending on the resources available on your laptop/desktop. Under the covers, this leverages kind (Kubernetes in Docker), but you still gain that familiarity of working with the tanzu command line interface (CLI) for interacting with the cluster(s).
Note that I stated that you do not need access to vSphere or cloud infrastructure to use TCE. I should clarify that you can absolutely use these infrastructures with Tanzu Community Edition, should you wish to do so. For those of you who have vSphere home labs, this is an excellent opportunity to get hands-on skills with Tanzu Kubernetes. If you do plan to deploy and run TCE on Docker, pay particular attention to the resource requirements, particularly memory, as in many cases the defaults are not sufficient.
The deployment is very simple, and can be driven either via the UI or from the tanzu CLI. Here is a screenshot of a standalone deployment of TCE on Docker. It begins by offering you different infrastructures to deploy to. In this case, Docker is chosen:
There are only a few additional criteria required to deploy on Docker, such as cluster name and proxy settings (if any). Note that it also validates that you have Docker running locally on your laptop/desktop:
You can then decide whether you want to complete the deployment via the UI, or to copy and paste the actual tanzu deployment command to a command line on your laptop/desktop. Assuming you continue with the UI approach, and the deployment is successful, you will hopefully see something like this with a green “Installation complete” banner once the various images have been pulled in, and the kind cluster / Docker containers have successfully deployed. Note that depending on your available bandwidth, it may take a number of minutes to pull down the Docker images required to stand up the cluster. However, once the images have been pulled down to your laptop/desktop for the very first time, subsequent deployments should be much quicker.
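If you do opt for the CLI route instead, the workflow is very similar. Here is a minimal sketch, assuming the tanzu CLI and Docker are already installed; the cluster name is just an example, and the exact commands and flags may differ slightly between TCE releases, so check the getting started guide:

# Confirm the tanzu CLI is available and Docker is running
tanzu version
docker info

# Launch the browser-based installer shown in the screenshots above
tanzu standalone-cluster create --ui

# Or create a standalone cluster on Docker directly from the command line
# ("my-tce-cluster" is just an example name, -i selects the infrastructure provider)
tanzu standalone-cluster create my-tce-cluster -i docker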
And just to repeat, this is just one approach. TCE can be used to deploy both standalone and managed TKG clusters to Docker on your local desktop, but it can also deploy fully conformant Kubernetes clusters onto vSphere, AWS and Azure.
A new way of learning and evaluating Kubernetes
As many regular readers will be aware, VMware offers free Kubernetes training through its KubeAcademy initiative. I’ve mentioned this a few times in previous blog posts on this site. With the release of Tanzu Community Edition, you now have a distribution of Tanzu Kubernetes which allows you to do your own training, testing and evaluation with VMware’s own brand of Kubernetes from your laptop or desktop. And even if you already have TKG in production, this free, open source distribution allows you to build on your existing skills and do your own research into potential applications without the risk of a “suck it and see” approach to deployments on your production clusters. You can get comfortable with the tanzu CLI for managing both clusters and packages, all from the comfort of your own laptop or desktop, or even in your own home lab vSphere environment, should you have one available.
Community packages
Tanzu Community Edition comes with a plethora of community packages that are simple to deploy and use via the kapp-controller from Carvel. These packages include Pinniped and Dex for identity management, allowing the clusters to be integrated with LDAP, for example. For monitoring, TCE includes a Prometheus package for metrics gathering/storing and Grafana for visualization. Our own Velero product is included for backup and restore of your apps. It also has Contour, which provides Ingress/HTTPProxy services via Envoy. It comes with a fluent-bit package for capturing and forwarding logs from a Kubernetes cluster, and a whole range of additional community packages.
The reason this is so interesting is that you can now build and test a complete application stack in TCE before ever deploying it to your production TKG environment. Say, for example, that you were looking to stand up a monitoring stack in production. There could be a lot of dependencies to get right, and also a lot of configuration to set up. You may require cert-manager to ensure secure communication between the various parts of the stack. You may want to use Contour to provide Ingress/HTTPProxy access to Prometheus and Grafana. Prometheus would need to be configured as a data source for Grafana, and you may need to create some basic dashboards to display cluster metrics. That is a lot of things to get right before ever seeing your first Grafana dashboard with some cluster metrics.
The great thing about the community packages is that much of this integration is done for you. Using TCE community packages, you can very quickly stand up a monitoring stack (even on Docker) before ever trying it out in production. You can then examine whether the certificate configuration is working, see what data is being collected, examine resource consumption so that you can size your production environment correctly, and also think about what other additional dashboard information might be required for production. We have a number of guides on how to get started with packages in the official TCE documentation, so check them out before you start.
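To give a flavour of what this looks like in practice, here is a rough sketch of standing up the monitoring pieces with the tanzu CLI. The repository URL, package names and version numbers below are indicative of the TCE v0.9.1 timeframe; treat them as placeholders and take the actual values from the output of tanzu package available list and the official docs:

# Add the TCE package repository so the community packages become available
tanzu package repository add tce-repo \
  --url projects.registry.vmware.com/tce/main:0.9.1 \
  --namespace tanzu-package-repo-global

# List the packages and versions on offer
tanzu package available list

# Install the building blocks of a monitoring stack (versions shown are examples only)
tanzu package install cert-manager --package-name cert-manager.community.tanzu.vmware.com --version 1.5.3
tanzu package install contour --package-name contour.community.tanzu.vmware.com --version 1.17.1
tanzu package install prometheus --package-name prometheus.community.tanzu.vmware.com --version 2.27.0
tanzu package install grafana --package-name grafana.community.tanzu.vmware.com --version 7.5.7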
One final thing to point out is that you will see us promote Tanzu Community Edition as “batteries included, but swappable”. By that, we mean that you are not obligated to use these community packages if you do not want to. Should you wish, you can deploy a TCE cluster and then use other package managers (such as Helm) to deploy your applications of choice. This is possible because the Kubernetes clusters deployed by TCE are fully conformant Kubernetes clusters. Indeed, you could use a combination of package managers such as Helm and Carvel to deploy the various components that make up an application. For example, if I wanted to test the fluent-bit logging functionality in TCE, I might deploy Elasticsearch and Kibana with Helm, and then use the tanzu package manager to configure and deploy the fluent-bit community package to ship the logs to Elasticsearch.
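As a loose sketch of that mixed approach, assuming the Elastic Helm charts and the TCE package repository are reachable (the chart names are the upstream defaults, while the package version and the values file below are examples only):

# Deploy Elasticsearch and Kibana using their upstream Helm charts
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch
helm install kibana elastic/kibana

# Then deploy the fluent-bit community package with the tanzu CLI, using a
# (hypothetical) values file that points its output at the Elasticsearch service
tanzu package install fluent-bit \
  --package-name fluent-bit.community.tanzu.vmware.com \
  --version 1.7.5 \
  --values-file fluent-bit-values.yaml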
Community supported
As mentioned, Tanzu Community Edition is a free, open source Tanzu Kubernetes distribution. There are no usage limitations, and it is a community supported project. If you need assistance, join the TCE Google Group and look for the #tanzu-community-edition channel in the Kubernetes Slack workspace.
For more information about Tanzu Community Edition, check out tanzucommunityedition.io. This has the download links and the official documentation. The first public release is v0.9.1. We look forward to hearing your feedback and suggestions.
I just deployed a Management and a Workload Cluster on vSphere.
But I am missing a load balancer to make k8s services available to users.
Or how can I publish services with an “external” IP address?
Load Balancer services can come from a few places.
If this is production, you might consider NSX ALB.
If it is just for evaluation and testing, I have used MetalLB quite a bit.
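For reference, a rough MetalLB sketch for a lab cluster might look like this; the release version and the address range are placeholders, so substitute values that suit your environment:

# Install MetalLB from its published manifests (substitute a current release version)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.3/manifests/metallb.yaml

# Give MetalLB a pool of addresses to hand out in Layer 2 mode
# (the address range is an example - use a free range from your own network)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF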
If you just want to access a containerized app’s portal, you can use “kubectl port-forward”. There are examples of how to do that here: https://tanzucommunityedition.io/docs/latest/docker-monitoring-stack/
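For example, to reach the Grafana UI from the monitoring stack without any load balancer or Ingress in place (the namespace and service name here assume the defaults used by the community package, so verify them with kubectl get svc first):

# Find the Grafana service created by the package
kubectl get svc -n grafana

# Forward a local port to the service and browse to http://localhost:3000
kubectl port-forward -n grafana svc/grafana 3000:80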