In my previous exercise with Heptio Velero, I looked at backing up and restoring a Couchbase deployment. This time I turned my attention to another popular containerized application, Cassandra. Cassandra is a NoSQL database, similar in some respects to Couchbase. Once again, I will be deploying Cassandra as a set of containers and persistent volumes via Kubernetes running on top of PKS, the Pivotal Container Service. And again, just as in my last exercise, I will be instantiating the Persistent Volumes as virtual disks on top of vSAN. I’ll show you how to get Cassandra up and running quickly by sharing…
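To make the shape of that deployment concrete, here is a minimal sketch of a Cassandra StatefulSet with a volumeClaimTemplate. This is an illustration only, not the exact manifest from the post: the image tag and the “vsan-default” StorageClass name are assumptions standing in for whatever vSAN-backed class exists in your cluster.

cat <<'EOF' | kubectl apply -f -
# Minimal sketch: a 3-node Cassandra StatefulSet whose persistent
# volumes are provisioned through a vSAN-backed storage class.
# "vsan-default" is a hypothetical StorageClass name.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: cassandra:3.11
        ports:
        - containerPort: 9042    # CQL native transport
        volumeMounts:
        - name: cassandra-data
          mountPath: /var/lib/cassandra
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: vsan-default   # assumption: a vSAN-backed class
      resources:
        requests:
          storage: 10Gi
EOF

Each replica then gets its own PersistentVolumeClaim, which is what makes StatefulSets a good fit for a database like Cassandra.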
Those of you who work in the cloud native space will probably be aware of VMware’s acquisition of Heptio back in December 2018. Heptio bring much expertise and a number of products to the table, one of which I was very eager to try: the Heptio Velero product, previously known as Heptio Ark. Heptio Velero provides a means to back up and restore cloud native applications. Interestingly enough, it appears to be able to capture all of the deployment details, so it can back up the pods (compute), persistent volumes (storage) and services (networking), as well…
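As a rough illustration of the workflow with the Velero CLI (the “cassandra” namespace here is just an example, not necessarily what the post uses):

# Back up everything in the "cassandra" namespace, including
# snapshots of its persistent volumes.
velero backup create cassandra-backup \
    --include-namespaces cassandra \
    --snapshot-volumes

# Check the backup status, then restore from it.
velero backup describe cassandra-backup
velero restore create --from-backup cassandra-backup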
I’m still playing with PKS 1.3 and NSX-T 2.3.1 in my lab. One issue that I kept encountering was that when deploying my Kubernetes cluster, my master and worker nodes kept failing with a “timed out” error when trying to do a ping. A bosh task command showed the errors, as shown here.

cormac@pks-cli:~$ bosh task
Using environment '192.50.0.140' as client 'ops_manager'
Task 845
Task 845 | 16:56:36 | Preparing deployment: Preparing deployment
Task 845 | 16:56:37 | Warning: DNS address not available for the link provider instance: pivotal-container-service/0c23ed00-d40a-4bfe-abee-1c
Task 845 | 16:56:37 | Warning: DNS address not available for the…
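If you hit something similar, the bosh CLI can replay the full log for a given task; a quick sketch, using the task ID 845 from the output above:

# Replay the complete debug log for the failing task
bosh task 845 --debug

# List the deployed VMs and their state to see which nodes failed
bosh vms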
At VMworld last year, Duncan Epping and I presented on the power of Storage Policy Based Management (SPBM for short). You can find all of the slides and recordings here. One of the demos we used in the presentation was deploying virtual machines via vRealize Automation, and showing how to consume a storage policy on vSAN. This was using a vRealize Automation plugin, and to be honest, it was a little bit challenging to get it to work. And it wasn’t really a VMware plugin per se, but rather something developed by our field team. Today, I’m pleased to announce that we…
Degraded Device Handling (DDH), or Dying Disk Handling as it was formerly known, is a feature that has been available in vSAN for some time. However, I regularly get questions about how it works. The DDH behavior has changed significantly over various versions, so we may as well begin this post with an overview of the purpose of DDH and then get into the different behaviors. First of all, the reason behind a feature such as DDH is to help avoid cluster performance degradation due to an unhealthy drive. In the early days of vSAN, we had come across…
I had a very interesting question recently about how vSAN handles a failure in an object that is running with an erasure coding configuration. In the case of vSAN, this is either RAID-5 or RAID-6. On vSAN, RAID-5 is implemented with 3 data segments and 1 parity segment (3+1), with parity striped across all four components. RAID-6 is implemented as 4 data segments and 2 parity segments (4+2), again with the parity striped across all six components. Now, on vSAN, RAID-5 requires 4 physical ESXi hosts for implementation, with each host backing one set of…
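For a sense of why these layouts are attractive, here is the standard capacity arithmetic (general RAID math, not specific to any vSAN release):

\[
\text{overhead} = \frac{\text{data} + \text{parity}}{\text{data}}:\qquad
\text{RAID-5: } \frac{3+1}{3} \approx 1.33\times,\qquad
\text{RAID-6: } \frac{4+2}{4} = 1.5\times
\]

Both compare favorably with the 2x capacity consumed by RAID-1 mirroring to tolerate a single failure.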
After just deploying the newest version of Pivotal Container Service (PKS) and rolling out my first Kubernetes cluster (read all about it here), I wanted to try something a bit more interesting than simply creating another persistent volume claim to test out our vSphere Cloud Provider, since I had done this a number of times already. Thanks to some of the work I have been doing with our cloud native team, I was introduced to StatefulSets. That piqued my interest a little, as I had not come across them before.
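For context, the vSphere Cloud Provider provisions persistent volumes through a Kubernetes StorageClass; a minimal sketch is below. The class name, policy name and datastore are hypothetical placeholders, not the values from my lab.

cat <<'EOF' | kubectl apply -f -
# Minimal sketch: a StorageClass backed by the vSphere Cloud Provider.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-default            # hypothetical class name
provisioner: kubernetes.io/vsphere-volume
parameters:
  storagePolicyName: gold       # hypothetical vSAN/SPBM policy
  datastore: vsanDatastore      # hypothetical datastore
EOF

A StatefulSet’s volumeClaimTemplates can then reference this class, and each pod replica gets its own dynamically provisioned virtual disk.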