A first look at Velero (previously known as Ark)

Those of you who work in the cloud native space will probably be aware of VMware’s acquisition of Heptio back in December 2018. Heptio brings a great deal of expertise and a number of products to the table, one of which I was very eager to try. This is the Heptio Velero product, previously known as Heptio Ark. Heptio Velero provides a means to back up and restore cloud native applications. Interestingly enough, it appears to be able to capture all of the deployment details, so it is able to back up the pods (compute), persistent volumes (storage) and services (networking), as well…
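To give a flavour of the workflow, here is a minimal sketch of backing up and restoring a single namespace with the Velero CLI (the namespace and backup names below are placeholders for illustration, not taken from the post):

velero backup create wp-backup --include-namespaces wordpress   # back up one namespace
velero backup get                                                # list backups and their status
velero restore create --from-backup wp-backup                    # restore from that backup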

Pivotal and Harbor – x509 certificate issues

After deploying and configuring the Harbor tile in Pivotal Ops Manager, I ran into a couple of issues with certificates. The first was encountered when I was trying to log in to Harbor from an Ubuntu VM where I was running all of my PKS and BOSH commands. It was also the VM where I pulled my container images, and the VM from which I now wanted to push them into Harbor. Harbor is our registry server for storing container images. Here is what I got when trying to log in: cormac@pks-cli:~$ sudo docker login -u admin harbor.rainpole.com Password: Error response from…
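For context, when a docker login against a registry backed by a self-signed or private CA fails with an x509 error, one common approach is to make the Docker daemon trust that CA explicitly via a registry-specific certs directory. A hedged sketch (the ca.crt file name is an assumption for illustration; the registry FQDN is the one from the post):

sudo mkdir -p /etc/docker/certs.d/harbor.rainpole.com
sudo cp ca.crt /etc/docker/certs.d/harbor.rainpole.com/ca.crt   # per-registry CA trust for Docker
sudo docker login -u admin harbor.rainpole.com                  # retry the login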

PKS and NSX-T: Error: Timed out pinging after 600 seconds

I’m still playing with PKS 1.3 and NSX-T 2.3.1 in my lab. One issue that I kept encountering was that, when deploying my Kubernetes cluster, my master and worker nodes kept failing with a “timed out” error when trying to ping. A bosh task command showed the errors, as shown here. cormac@pks-cli:~$ bosh task Using environment ‘192.50.0.140’ as client ‘ops_manager’ Task 845 Task 845 | 16:56:36 | Preparing deployment: Preparing deployment Task 845 | 16:56:37 | Warning: DNS address not available for the link provider instance: pivotal-container-service/0c23ed00-d40a-4bfe-abee-1c Task 845 | 16:56:37 | Warning: DNS address not available for the…
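If you hit something similar, a few standard BOSH CLI commands are useful for digging deeper into a failed task (task ID 845 is taken from the output above; the deployment name is a placeholder):

bosh tasks --recent              # list recent tasks and their state
bosh task 845 --debug            # full debug log for the failed task
bosh -d <deployment-name> vms    # check which instances actually came up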

New vRealize Automation, Cloud Assembly integration with SPBM

At VMworld last year, Duncan Epping and I presented on the power of Storage Policy Based Management (SPBM for short). You can find all of the slides and recordings here. One of the demos we used in the presentation was deploying virtual machines via vRealize Automation, and showing how to consume a storage policy on vSAN. This was using a vRealize Automation plugin, and to be honest, it was a little bit challenging to get it to work. And it wasn’t really a VMware plugin per se, but something developed by our field team. Today, I’m pleased to announce that we…

Reviewing PKS logs and status

After a bit of a sabbatical, I am back to looking at PKS (Pivotal Container Service) again. I wanted to look at the new version 1.3, but I had to do a bit of work on my environment to allow me to do this. Primarily, I needed to upgrade my NSX-T environment from version 2.1 to 2.3. I followed this blog post from vmtechie, which provides a useful step-by-step guide. Kudos to our VMware NSX-T team, as the upgrade worked without a hitch. My next step was to start work on the PKS deployment. I just did a brand new deployment…
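Once a deployment is in place, a few commands I find handy for reviewing cluster status (the cluster name and BOSH environment alias below are placeholders, not from the post):

pks clusters                              # list PKS clusters and their last action
pks cluster k8s-cluster-01                # details for a single cluster
bosh -e pks deployments                   # locate the service-instance deployment
bosh -e pks -d service-instance_<id> vms  # health of the master and worker VMs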

Degraded Device Handling (DDH) Revisited

Degraded Device Handling (DDH), or Dying Disk Handling as it was formerly known, is a feature that has been available in vSAN for some time. However, I regularly get questions about how it works. The DDH behavior has changed significantly over various versions. We may as well begin this post with an overview of the purpose of DDH and then get into the different behaviors. First of all, the reason behind a feature such as DDH is to help avoid cluster performance degradation due to an unhealthy drive. In the early days of vSAN, we had come across…

vSAN Erasure Coding Failure Handling

I had a very interesting question recently about how vSAN handles a failure in an object that is running with an erasure coding configuration. In the case of vSAN, this is either RAID-5 or RAID-6. On vSAN, RAID-5 is implemented with 3 data segments and 1 parity segment (3+1), with parity striped across all four components. RAID-6 is implemented as 4 data segments and 2 parity segments (4+2), again with the parity striped across all six components. Now, on vSAN, RAID-5 requires 4 physical ESXi hosts for implementation, with each host backing one set of…
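As a quick back-of-the-envelope aside (my own arithmetic, not from the post), the space overhead and fault tolerance follow directly from those layouts:

RAID-1 mirror (FTT=1): overhead = 2x, tolerates 1 failure
RAID-5 (3+1): overhead = (3+1)/3 ≈ 1.33x, tolerates 1 failure
RAID-6 (4+2): overhead = (4+2)/4 = 1.5x, tolerates 2 failures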