Moving a Stateful App from a VCP to a CSI-based Kubernetes Cluster using Velero

Since the release of the vSphere CSI driver in vSphere 6.7U3, I have had a number of requests about how we plan to migrate applications between Kubernetes clusters that are using the original in-tree vSphere Cloud Provider (VCP) and Kubernetes clusters that are built with the new vSphere CSI driver. All I can say at this point in time is that we are looking at ways to seamlessly achieve this at some point in the future, and that the Kubernetes community has a migration design in the works to move from in-tree providers to the new CSI driver as well.…
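
In the meantime, one practical path, and the one the post title points at, is a Velero backup on the VCP cluster followed by a restore on the CSI cluster. The commands below are only a generic sketch, assuming Velero is already installed in both clusters with a shared backup location; the cassandra namespace name is purely illustrative:

```console
# On the source (VCP) cluster: back up the namespace holding the stateful app
velero backup create cassandra-backup --include-namespaces cassandra

# Confirm the backup completed before moving on
velero backup describe cassandra-backup

# On the destination (CSI) cluster, pointed at the same backup location:
velero restore create cassandra-restore --from-backup cassandra-backup
```

When volume data is backed up at the file-system level (restic), the restored PVCs are provisioned fresh on the destination side, so they can land on the CSI driver's StorageClass rather than the original VCP-provisioned VMDKs.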

Introducing vSphere Cloud Native Storage (CNS)

I’m delighted to be able to share with you that, coinciding with the release of vSphere 6.7 U3, VMware have also announced Cloud Native Storage (CNS). CNS builds on the legacy of the earlier vSphere Cloud Provider (VCP) for Kubernetes. Along with new releases of the Container Storage Interface (CSI) driver for vSphere and the Cloud Provider Interface (CPI) for vSphere, it aims to improve container volume management and provide deep insight into how container applications running on top of vSphere infrastructure consume the underlying vSphere storage. Now, there may be a lot of unfamiliar terminology in that opening…

Kubernetes on vSphere 101 – Services

This will be the last article in the 101 series, as I think I have covered off most of the introductory storage-related items at this point. One object that came up time and again during the series was the service. While not specifically a storage item, it is a fundamental building block of Kubernetes applications. In the 101 series, we came across a “headless” service in the Cassandra StatefulSet demo; this was a service whose ClusterIP was set to None. When we started to look at ReadWriteMany volumes, we used NFS to demonstrate those volumes in action. In the first NFS…
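
As a quick refresher on the headless service mentioned above: it is just an ordinary Service manifest with clusterIP set to None, so DNS resolves directly to the individual Pod IPs rather than to a single virtual IP. A minimal sketch for the Cassandra example (the names and port are illustrative, not the exact manifest from the post):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  clusterIP: None        # "headless": no virtual IP, DNS returns the Pod IPs
  ports:
    - port: 9042         # Cassandra CQL port, shown here for illustration
  selector:
    app: cassandra       # selects the Pods created by the StatefulSet
```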

Kubernetes Storage on vSphere 101 – Failure Scenarios

We have looked at quite a few scenarios of Kubernetes running on vSphere and what they mean for storage. We looked at PVs, PVCs, Pods, StorageClasses, Deployments and ReplicaSets, and most recently we looked at StatefulSets. In a few of the posts we looked at some controlled failures, for example, when we deleted a Pod from a Deployment or from a StatefulSet. In this post, I wanted to look a bit closer at an uncontrolled failure, say when a node crashes. However, before getting into this in too much detail, it is worth highlighting a few of the…
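
For anyone following along, the simplest way to watch what happens during this kind of uncontrolled failure is from kubectl. The commands below are just a generic sketch (the Pod name is illustrative):

```console
# Watch the node transition to NotReady after the crash
kubectl get nodes

# Watch where the Pods end up (Unknown/Terminating, then rescheduled elsewhere)
kubectl get pods -o wide --watch

# Dig into a specific Pod's events once it has been rescheduled
kubectl describe pod cassandra-0
```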

Kubernetes Storage on vSphere 101 – StatefulSet

In my last post we looked at creating a highly available application that used multiple Pods in Kubernetes with Deployments and ReplicaSets. However, this was only focused on Pods. In this post, we will look at another way of creating highly available applications through the use of StatefulSets. The first question you will probably have is: what is the difference between a Deployment (with ReplicaSets) and a StatefulSet? At a high level, the major difference is that a Deployment is involved in maintaining the desired number of Pods available for an application, whereas a…
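
To make the distinction concrete, here is a minimal StatefulSet sketch; the names, image and sizes are illustrative, and the vsan-sc StorageClass is assumed to exist. The volumeClaimTemplates section is the defining feature: each Pod replica gets its own stable, named PVC, which a Deployment does not provide.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web            # headless service the Pods register with
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:       # one PVC per replica: www-web-0, www-web-1, www-web-2
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: vsan-sc
        resources:
          requests:
            storage: 1Gi
```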

Kubernetes Storage on vSphere 101 – Deployments and ReplicaSets

In my previous 101 posts on Kubernetes Storage on vSphere, we saw how to create “static” persistent volumes (PVs) by mapping an existing virtual machine disk (VMDK) directly into a PV manifest (YAML) file. We also saw that we could dynamically instantiate PVs through the use of a StorageClass. A StorageClass can also be used to apply features of the underlying vSphere storage, such as a storage policy, to a PV, and Pods can consume both static and dynamic PVs through the use of persistent volume claims (PVCs). However, in both previous exercises, we…
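
For reference, dynamic provisioning with the in-tree VCP driver hinges on a StorageClass along the lines of the sketch below; the class name, datastore and policy name are illustrative and would need to match your own environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-sc
provisioner: kubernetes.io/vsphere-volume   # the in-tree VCP provisioner
parameters:
  diskformat: thin                          # thin-provisioned VMDK
  storagePolicyName: gold                   # vSphere/vSAN storage policy to apply
  datastore: vsanDatastore                  # datastore where new VMDKs are created
```

A PVC that names this StorageClass triggers creation of a new VMDK with that policy applied, which is the dynamic path described above.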

Kubernetes Storage on vSphere 101 – StorageClass

In the first 101 post, we talked about persistent volumes (PVs), persistent volume claims (PVCs) and Pods (a Pod being a group of one or more containers). In particular, we saw how, with Kubernetes on vSphere, a persistent volume is essentially a VMDK (virtual machine disk) on a datastore. In that first post, we created a static VMDK on a vSAN datastore, then built manifest files (in our case YAML) for a PV, a PVC and finally a Pod, and showed how to map that static, pre-existing VMDK directly to the Pod so that it could be mounted. We saw…
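
To illustrate the static case, a PV manifest for the in-tree VCP driver points straight at an existing VMDK via a volumePath. The sketch below is generic; the datastore, folder and file names are placeholders rather than the exact ones used in the post:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-vmdk-pv
spec:
  capacity:
    storage: 2Gi                 # should match the size of the pre-created VMDK
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[vsanDatastore] kubevols/demo-disk.vmdk"   # pre-created VMDK
    fsType: ext4
```

A PVC with a matching size and access mode then binds to this PV, and the Pod references the PVC, which is the same PV, PVC, Pod chain the post walks through.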