vSphere 7.0, Cloud Native Storage, CSI and vVols support

With the release of vSphere 7.0, we also announced enhancements to our Cloud Native Storage (CNS) offering. One of the new features that we now offer in vSphere 7.0 is the ability to provision Virtual Volumes (vVols) to back Kubernetes Persistent Volumes (PVs) via our updated version of the vSphere Container Storage Interface (CSI) driver. In this post, I will walk through the steps involved in consuming vVols via Kubernetes manifest files when dynamically provisioning PVs. I will also show some enhancements to our CNS UI in vSphere 7.0 so that you can easily identify vVol-backed PVs. Step 1…
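To give a flavour of what the post walks through, here is a minimal sketch (not taken verbatim from the post) of a StorageClass and PVC that would dynamically provision a vVol-backed PV through the vSphere CSI driver. The SPBM policy name "vvol-demo-policy" is a placeholder for whatever vVol-capable policy exists in your environment.

```yaml
# Sketch only: a StorageClass mapped to an SPBM policy that targets a vVol
# datastore, plus a PVC that consumes it. Names are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vvol-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vvol-demo-policy"   # placeholder SPBM policy name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vvol-pvc
spec:
  storageClassName: vvol-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```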

Building a TKG Cluster in vSphere with Kubernetes

Now that we have our vSphere with Kubernetes deployed, we take the next logical step in this post and deploy a Tanzu Kubernetes Grid (TKG) guest cluster. [Update] Whilst "guest cluster" isn't an official name for the Tanzu Kubernetes cluster, I'll use it in this post to differentiate it from the Supervisor cluster deployed with vSphere with Kubernetes. TKG is a fully CNCF-certified Kubernetes distribution. It is deployed as a set of virtual machines, in accordance with a TanzuKubernetesCluster manifest which we will look at later. The OS and K8s distribution are also specified in the manifest. There may…
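For context, a TanzuKubernetesCluster manifest looks roughly like the sketch below. The cluster name, Supervisor namespace, VM class and storage class names are all placeholders; the classes and storage policies available will vary per environment.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01            # placeholder cluster name
  namespace: demo-namespace       # placeholder Supervisor namespace
spec:
  distribution:
    version: v1.16                # K8s version, resolved against the content library images
  topology:
    controlPlane:
      count: 1
      class: best-effort-small    # VM class; available classes vary by setup
      storageClass: vsan-default-storage-policy   # placeholder storage class
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-storage-policy
```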

A first look at vSphere with Kubernetes in action

In my previous post on VCF 4.0, we looked at the steps involved in deploying vSphere with Kubernetes in a Workload Domain (WLD). When we completed that step, we had rolled out the Supervisor Control Plane VMs, and installed the Spherelet components which allow our ESXi hosts to behave as Kubernetes worker nodes. Let’s now take a closer look at that configuration, and I will show you a few simple Kubernetes operations to get you started on the Supervisor Cluster in vSphere with Kubernetes. Disclaimer: “Like my earlier posts, I want to be clear, this post is based on a…
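As an example of the sort of simple first operation covered, deploying a vSphere Pod to a Supervisor namespace is just a standard Pod manifest applied with kubectl (after logging in via the kubectl vsphere plugin). The namespace below is a placeholder that would be created beforehand in vCenter.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: demo-namespace       # placeholder; created via the vCenter UI
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]  # keep the PodVM running so we can inspect it
```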

Read-Write-Many Persistent Volumes with vSAN 7 File Services

A few weeks back, just after the vSphere 7.0 launch event, I wrote an article about Native File Services in vSAN 7.0. I had a few questions asking why we decided on NFS support in this initial release, and not something like SMB or some other protocol. The reason is quite straightforward. We are positioning vSAN as a platform for both traditional virtual machine workloads and newer containerized workloads. We chose NFS to address a storage requirement in Kubernetes, namely a way to share Persistent Volumes between Pods. To date, the vSphere CSI driver has only provisioned block-based Persistent Volumes…
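From the consumer side, the key is the access mode: a ReadWriteMany claim signals the CSI driver to provision an NFS file share from vSAN File Services rather than a block volume. A minimal sketch, with a placeholder StorageClass name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
spec:
  storageClassName: vsan-file-sc   # placeholder; maps to a vSAN SPBM policy
  accessModes:
    - ReadWriteMany                # RWX => NFS file share; RWO would yield a block volume
  resources:
    requests:
      storage: 5Gi
```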

Native File Services for vSAN 7

On March 10th 2020, we saw a plethora of VMware announcements around vSphere 7.0, vSAN 7.0, VMware Cloud Foundation 4.0 and of course the Tanzu portfolio. The majority of these announcements tie in very deeply with the overall VMware company vision: any application, on any cloud, on any device. Those applications have traditionally been virtualized applications. Now we are turning our attention to newer, modern applications which are predominantly container based, and predominantly run on Kubernetes. Our aim is to build a platform which can build, run, manage, connect and protect both traditional virtualized applications and modern containerized…

Deploying flannel, vSphere CPI and vSphere CSI with later versions of Kubernetes

I recently wanted to deploy a newer version of Kubernetes to see it working with our Cloud Native Storage (CNS) feature. Having assisted with the original landing pages for CPI and CSI, I’d done this a few times in the past. However, the deployment tutorial that we used back then was based on Kubernetes version 1.14.2. I wanted to go with a more recent build of K8s, e.g. 1.16.3. By the way, if you are unclear about the purposes of the CPI and CSI, you can learn more about them on the landing page, here for CPI and here for…
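For reference, the two settings that most commonly trip people up with this combination are flannel's pod CIDR and the external cloud provider flag needed by the CPI. A hedged kubeadm sketch (assuming kubeadm's v1beta2 config API, which applies to these K8s versions):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external     # nodes stay tainted until the vSphere CPI initializes them
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  podSubnet: "10.244.0.0/16"     # flannel's default pod network CIDR
```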

vtopology – Insights into vSphere infrastructure from kubectl

As I got more and more familiar with running Kubernetes on top of vSphere, I came to the realization that it might be useful to be able to query the vSphere infrastructure from Kubernetes, particularly via kubectl. For example, I might like to know some of the details about the master nodes and worker nodes (e.g. which ESXi host are they on? How many resources are they consuming?). Also, if I have a persistent volume, how can I query which vSphere datastore it is on, which policy it is using, and what the path to the VMDK is? Therefore I started…