Kubernetes Storage on vSphere 101 – Deployments and ReplicaSets

In my previous 101 posts on Kubernetes Storage on vSphere, we saw how to create “static” persistent volumes (PVs) by mapping an existing virtual machine disk (VMDK) directly into a PV manifest YAML file. We also saw that we could dynamically instantiate PVs through the use of a StorageClass, and how a StorageClass can also be used to apply features of the underlying vSphere storage, such as a storage policy, to a PV. Finally, we saw how Pods can consume both static and dynamic PVs through the use of persistent volume claims (PVCs). However, in both previous exercises, we…
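
As a quick refresher on the pattern described there, here is a minimal sketch of a Deployment whose Pod template consumes an existing PVC; the names, image and claim below are placeholders rather than the ones used in the post.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-deployment          # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: demo-vol
          mountPath: /demo           # where the persistent volume appears in the container
      volumes:
      - name: demo-vol
        persistentVolumeClaim:
          claimName: demo-pvc        # assumes a PVC of this name already exists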

Kubernetes Storage on vSphere 101 – StorageClass

In the first 101 post, we talked about persistent volumes (PVs), persistent volume claims (PVCs) and Pods (a group of one or more containers). In particular, we saw how with Kubernetes on vSphere, a persistent volume is essentially a VMDK (virtual machine disk) on a datastore. In that first post, we created a static VMDK on a vSAN datastore, then built manifest files (in our case YAML) for a PV, a PVC and finally a Pod, and showed how to map that static, pre-existing VMDK directly to the Pod so that it could be mounted. We saw…
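
To illustrate the static case recapped above, here is a minimal sketch of a PV that maps a pre-existing VMDK via the in-tree vSphere volume plugin, along with a PVC to claim it; the datastore path and names are hypothetical placeholders, not the ones from the post.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-demo-pv              # hypothetical name
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[vsanDatastore] demo/demo.vmdk"   # placeholder path to an existing VMDK
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-demo-pvc             # hypothetical name
spec:
  storageClassName: ""              # empty so the claim binds to the pre-created PV rather than provisioning a new one
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi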

Kubernetes Storage on vSphere 101 – The basics: PV, PVC, POD

I’ve just returned from KubeCon 2019 in Barcelona, and was surprised to see such a keen interest in how Kubernetes consumes infrastructure-related resources, especially storage. Although I have been writing a lot about Kubernetes-related items recently, I wanted to put together a primer on some storage concepts that might be useful as a stepping stone, or even an on-boarding process, for those of you who are quite new to Kubernetes. I am going to talk about this from the point of view of vSphere and vSphere storage. Thus I will try to map vSphere storage constructs such as…

Getting started with Velero 1.0.0-rc.1

Last week, the Velero team announced the availability of release candidate (RC) version 1.0.0. I was eager to get my hands on it and try it out. Since it is an RC (and not GA), I thought I would just deploy a fresh environment for testing. The guidance from the Velero team is to test it out in your non-critical environments! On a number of Velero GitHub sites, the links to download the binaries do not appear to be working, plus some of the install guidance is a little sparse. Anyhow, after some trial and error, I decided it might be…

Kubernetes, Hadoop, Persistent Volumes and vSAN

At VMworld 2018, one of the sessions I presented on was running Kubernetes on vSphere, and specifically using vSAN for persistent storage. In that presentation (which you can find here), I used Hadoop as a specific example, primarily because there are a number of moving parts to Hadoop. For example, there are the concepts of a Namenode and a Datanode. Put simply, a Namenode provides the lookup for blocks, whereas Datanodes store the actual blocks of data. Namenodes can be configured in an HA pair with a standby Namenode, but this requires a lot more configuration and resources, and introduces additional…
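
For context, the way this usually hangs together on Kubernetes is a StatefulSet with a volumeClaimTemplate, so that each Datanode gets its own persistent volume. The sketch below is a hypothetical, trimmed-down example; the image, names and the vSAN-backed StorageClass it references are placeholders, not what was used in the session.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hadoop-datanode             # hypothetical name
spec:
  serviceName: hadoop-datanode
  replicas: 3
  selector:
    matchLabels:
      app: hadoop-datanode
  template:
    metadata:
      labels:
        app: hadoop-datanode
    spec:
      containers:
      - name: datanode
        image: example/hadoop-datanode:latest       # placeholder image
        volumeMounts:
        - name: hdfs-data
          mountPath: /hadoop/dfs/data               # where the Datanode stores its blocks
  volumeClaimTemplates:
  - metadata:
      name: hdfs-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: vsan-sc                     # assumes a vSAN-backed StorageClass exists
      resources:
        requests:
          storage: 10Gi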

PKS Revisited – Project Hatchway / K8s vSphere Cloud Provider review

As I am going to be doing some talks around next-gen applications at this year’s VMworld event, I took the opportunity to revisit Pivotal Container Service (PKS) to take a closer look at how we can provision persistent volumes for container-based applications. Not only that, but I also wanted to leverage the vSphere Cloud Provider feature, which is part of our Project Hatchway initiative. I’ve written about Project Hatchway a few times now, but in a nutshell it allows us to create persistent container volumes on vSphere storage, and at the same time set a storage policy on the…
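
As a rough illustration of that last point, a StorageClass using the vSphere Cloud Provider can reference an SPBM policy by name, so that every volume provisioned from it inherits that policy; the class and policy names below are placeholders, not taken from the post.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-policy-sc              # hypothetical name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  storagePolicyName: gold           # assumes a storage policy called "gold" already exists in vCenter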