Kubernetes Storage on vSphere 101 – The basics: PV, PVC, POD

I’ve just returned from KubeCon 2019 in Barcelona, and was surprised to see such a keen interest in how Kubernetes consumes infrastructure-related resources, especially storage. Although I have been writing about a lot of Kubernetes-related topics recently, I wanted to put together a primer on some storage concepts that might be useful as a stepping stone or on-boarding aid for those of you who are quite new to Kubernetes. I am going to approach this from the point of view of vSphere and vSphere storage, and will try to map vSphere storage constructs such as…
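To make the mapping concrete, here is a minimal sketch (the names are mine, not from the post) of a PersistentVolumeClaim dynamically provisioning a volume through the in-tree vSphere provisioner, so the resulting PV is backed by a VMDK on vSphere storage:

cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-thin
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere-thin
  resources:
    requests:
      storage: 2Gi
EOF

# The claim binds to a dynamically created PV; a pod then mounts it by
# referencing the claim name "demo-pvc" in its volumes section.
kubectl get pv,pvc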

Pivotal and Harbor – x509 certificate issues

After deploying and configuring the Harbor tile in Pivotal Ops Manager, I ran into a couple of issues with certificates. The first was encountered when I was trying to log in to Harbor from an Ubuntu VM where I was running all of my PKS and BOSH commands. It was also the VM where I pulled my container images, and the VM from which I now wanted to push them into Harbor. Harbor is our registry server for storing container images. Here is what I got when trying to log in: cormac@pks-cli:~$ sudo docker login -u admin harbor.rainpole.com Password: Error response from…
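For anyone hitting a similar x509 error against a registry signed by a private CA, one standard Docker-side fix is to drop the CA certificate into the daemon’s per-registry trust directory. A rough sketch, reusing the registry FQDN from above (the ca.crt filename and its current location are assumptions):

# Make the Harbor CA known to the Docker daemon for this one registry
sudo mkdir -p /etc/docker/certs.d/harbor.rainpole.com
sudo cp ca.crt /etc/docker/certs.d/harbor.rainpole.com/ca.crt

# certs.d is read on demand, so a daemon restart is not strictly required
sudo docker login -u admin harbor.rainpole.com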

A primer on First Class Disks/Improved Virtual Disks

A First Class Disk (FCD), also referred to as an Improved Virtual Disk (IVD), is one of the more recent features in vSphere that may have escaped your notice. FCDs were created to address a particular gap that we have in vSphere at this time. We are well aware that within a vSphere environment, it is currently very difficult to manage virtual disks unless they are associated with a virtual machine. A simple example would be snapshots. Snapshots work on a per-VM basis, and snapshotting only a single VMDK rather than all of the VMDKs attached to a VM involves a…
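As a taste of what FCDs make possible, here is a hedged sketch using govc (the govmomi CLI), assuming a reasonably recent build that includes the disk.* commands and the usual GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables; the disk name and snapshot description are made up:

govc disk.create -size 10G demo-fcd      # a virtual disk with no VM attached
govc disk.ls                             # list FCDs and their UUIDs

# Snapshot just this one disk, rather than every VMDK on a VM,
# using the UUID reported by disk.ls above
govc disk.snapshot.create <disk-id> "pre-upgrade"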

PKS Revisited – Project Hatchway / K8s vSphere Cloud Provider review

As I am going to be doing some talks around next-gen applications at this year’s VMworld event, I took the opportunity to revisit Pivotal Container Service (PKS) to take a closer look at how we can set up persistent volumes for container-based applications. Not only that, but I also wanted to leverage the vSphere Cloud Provider feature which is part of our Project Hatchway initiative. I’ve written about Project Hatchway a few times now, but in a nutshell it allows us to create persistent container volumes on vSphere storage, and at the same time set a storage policy on the…
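The storage policy piece looks roughly like the following sketch: a StorageClass that carries an SPBM policy name, so every container volume provisioned from it must land on vSphere storage satisfying that policy. The class name and policy name here are illustrative:

cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold-sc
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  storagePolicyName: gold
EOF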

A first look at vFile – Sharing a persistent volume between containers

Regular readers will have noticed that I have been doing a bit of work recently with Docker Swarm, and what you need to do to get it working on VMs running on vSphere. The reason I had taken such an interest is that I wanted to look at a new product that our Project Hatchway team has been cooking up, namely vFile. In a nutshell, vFile provides simultaneous, persistent volume access between nodes in the same Docker Swarm cluster. In some ways, it can be thought of as an extension to vDVS, the vSphere Docker Volume Service (from…
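To give a flavour of the workflow, here is a sketch of creating a shared volume and mounting it into a swarm service, assuming the vFile plugin is installed and registers its driver as “vfile” (check with “docker plugin ls”); the volume and service names are invented:

# One volume, visible to containers across swarm nodes
docker volume create --driver=vfile --name=shared-vol -o size=10gb

# Two replicas, potentially on different nodes, sharing the same data
docker service create --replicas 2 --name writer \
    --mount source=shared-vol,target=/data alpine sleep 1d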

Validating overlay network when docker swarm running on Centos VMs on vSphere

I got a chance to revisit my Docker Swarm deployment this week after a bit of a break. I was a little curious about my setup because when I spoke to some of our ‘Project Hatchway’ engineers, I was told that I should be able to launch a single instance of Nginx in Docker Swarm (“docker service create --replicas 1 -p 8080:80 --name web nginx”) and then access the web service from any swarm node with “curl 127.0.0.1:8080”. This was not what I was seeing. When I launched the Nginx service, the…
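For reference, the swarm routing mesh needs a handful of ports open between the nodes before “curl 127.0.0.1:8080” can work from anywhere in the cluster. On CentOS with firewalld, a sketch of what to run on each node (firewalld is an assumption here; an iptables-based setup needs the equivalent rules):

sudo firewall-cmd --permanent --add-port=2377/tcp   # cluster management
sudo firewall-cmd --permanent --add-port=7946/tcp   # node discovery
sudo firewall-cmd --permanent --add-port=7946/udp
sudo firewall-cmd --permanent --add-port=4789/udp   # VXLAN overlay traffic
sudo firewall-cmd --reload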

Simple test for Docker Swarm functionality with Photon OS

After highlighting how easy it is to run Docker Swarm on Photon OS, I had a follow-on question about how easy it would be to test the functionality. Just to recap, the only additional step needed to get Docker Swarm running on Photon OS was to open port 2377 on the master node. After that, you simply initialize the master, and all the other nodes/VMs are added as swarm workers. You might be wondering if you need to do a bunch of other stuff in iptables for docker, but the answer is no (for this relatively simple test…
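Spelled out as commands, the recap looks something like this; the advertise address and the worker token are placeholders, not values from the post:

# On the master (Photon OS)
iptables -A INPUT -p tcp --dport 2377 -j ACCEPT
docker swarm init --advertise-addr 192.168.1.10

# On each worker, paste the join command that "swarm init" printed
docker swarm join --token <worker-token> 192.168.1.10:2377

# A quick functional test: one small service, then check it converged
docker service create --replicas 3 --name web -p 8080:80 nginx
docker service ls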