In this post, I am going to share another short video that I made, highlighting the main steps involved in deploying vSphere with Kubernetes from VCF 4.0 SDDC Manager. You can find the complete steps here, in this previous post which shows how to deploy vSphere with Kubernetes in a Workload Domain. The video will talk you through the validation steps that are done in SDDC Manager, and then show the complete vSphere with Kubernetes deployment in the vSphere UI. We will also see the configuration changes that are made to NSX-T during the process. At the…
A little while back, I wrote a post about the steps involved in automatically deploying an NSX-T 3 Edge Cluster in VMware Cloud Foundation 4.0. I also thought that it might be useful to show the steps involved in a very short video (less than 4 minutes in length). Automatic deployment of NSX-T 3 Edge clusters in VCF 4.0 is a really nice new feature, as those of us who have gone through the manual process of creating NSX-T Edge clusters can testify. Check out the video on YouTube here:
This is something I noticed in the vSphere 7.0 host client: the Actions button in the host client for System > Time & date isn't working. This means that we have to find an alternate method to enable NTP on the stand-alone host. What we will need to do is the following:

1. Configure the NTP startup policy and NTP server(s)
2. Enable the NTP port in the firewall rules
3. Start the NTP service manually
4. Verify NTP is working

1. Configure NTP startup policy and NTP server

To begin, select the correct NTP service startup policy and NTP server from the System…
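For reference, the same result can also be achieved entirely from the ESXi shell. Here is a minimal sketch, assuming SSH access to the host; the NTP server (pool.ntp.org) is a placeholder:

```shell
# Point the NTP daemon at a time source (pool.ntp.org is a placeholder)
echo "server pool.ntp.org" >> /etc/ntp.conf

# Set the ntpd startup policy so the service starts with the host
chkconfig ntpd on

# Enable the NTP client ruleset in the ESXi firewall
esxcli network firewall ruleset set --ruleset-id ntpClient --enabled true

# Start the NTP service manually
/etc/init.d/ntpd start

# Verify NTP is working - the peers list should show the configured server
ntpq -p
```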
Now that we have our vSphere with Kubernetes deployed, we take the next logical step in this post and deploy a Tanzu Kubernetes Grid (TKG) guest cluster. [Update] Whilst guest cluster isn't an official name for the Tanzu Kubernetes cluster, I'll use it in this post to differentiate it from the Supervisor Cluster deployed with vSphere with Kubernetes. TKG is a fully CNCF-certified Kubernetes distribution. It is deployed as a set of virtual machines, in accordance with a TanzuKubernetesCluster manifest, which we will look at later. The OS and K8s distribution are also specified in the manifest. There may…
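To give a flavour of what such a manifest looks like, here is a minimal sketch applied to the Supervisor Cluster. The cluster name, namespace, VM class, storage class and distribution version are all placeholders and will differ per environment:

```shell
# Hypothetical TanzuKubernetesCluster manifest - all names are placeholders
cat <<EOF | kubectl apply -f -
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01          # placeholder cluster name
  namespace: demo-namespace     # a namespace created in the vSphere UI
spec:
  distribution:
    version: v1.16              # desired Kubernetes distribution version
  topology:
    controlPlane:
      count: 1                  # single control plane node for a demo
      class: best-effort-small  # VM class sizing the control plane VM
      storageClass: demo-storage-policy
    workers:
      count: 2                  # two worker nodes
      class: best-effort-small
      storageClass: demo-storage-policy
EOF
```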
In my previous post on VCF 4.0, we looked at the steps involved in deploying vSphere with Kubernetes in a Workload Domain (WLD). When we completed that step, we had rolled out the Supervisor Control Plane VMs and installed the Spherelet components, which allow our ESXi hosts to behave as Kubernetes worker nodes. Let’s now take a closer look at that configuration, and I will show you a few simple Kubernetes operations to get you started on the Supervisor Cluster in vSphere with Kubernetes. Disclaimer: “Like my earlier posts, I want to be clear, this post is based on a…
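As a taster of those operations, here is a minimal sketch of logging in to the Supervisor Cluster with the vSphere plugin for kubectl and running a few read-only commands. The server address, username and namespace are placeholders:

```shell
# Log in via the vSphere plugin for kubectl (server/credentials are placeholders)
kubectl vsphere login --server=10.0.0.1 \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify

# Switch context to a namespace created in the vSphere UI (placeholder name)
kubectl config use-context demo-namespace

# The ESXi hosts, running the Spherelet, show up as Kubernetes nodes
kubectl get nodes

# List the namespaces visible on the Supervisor Cluster
kubectl get namespaces
```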
A few weeks back, just after the vSphere 7.0 launch event, I wrote an article about Native File Services in vSAN 7.0. I had a few questions asking why we decided on NFS support in this initial release, and not something like SMB or another protocol. The reason is quite straightforward. We are positioning vSAN as a platform for both traditional virtual machine workloads and newer containerized workloads. We chose NFS to address a storage requirement in Kubernetes, namely a way to share Persistent Volumes between Pods. To date, the vSphere CSI driver has only provisioned block-based Persistent Volumes…
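To illustrate the requirement, here is a sketch of the sort of Persistent Volume Claim that file-based volumes make possible: a ReadWriteMany claim that multiple Pods can mount simultaneously. The claim name and StorageClass are hypothetical:

```shell
# A ReadWriteMany PVC sketch; block-based volumes are limited to ReadWriteOnce
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc               # placeholder claim name
spec:
  accessModes:
    - ReadWriteMany              # file-based volume, shareable between Pods
  resources:
    requests:
      storage: 2Gi
  storageClassName: vsan-file-sc # hypothetical class backed by vSAN file shares
EOF
```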
At this point, we have a fully configured workload domain which includes an NSX-T Edge deployment. Check here for the previous VCF 4.0 deployment steps. We are now ready to go ahead and deploy vSphere with Kubernetes, formerly known as Project Pacific. Via SDDC Manager in VMware Cloud Foundation 4.0, we ensure that an NSX-T Edge is available, and we also ensure that the Workload Domain is sufficiently licensed to enable vSphere with Kubernetes. Disclaimer: “To be clear, this post is based on a pre-GA version of VMware Cloud Foundation 4.0. While the assumption is that not much…