Some time back, I looked at what it would take to run a container-based Minio S3 object store on top of vSAN. This involved using our vSphere Docker Volume Service (aka Project Hatchway), and the details can be found here. However, I wanted to evaluate what it would take to scale out the Minio S3 object store on top of vSAN, paying particular attention to features like distribution and availability, and to examine the various data services that can be provided by both vSAN and Minio. I also wanted to take advantage of the new host-pinning feature in vSAN…
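As a quick illustration (my own sketch rather than anything from the original post), once the Minio container is up and backed by a vSphere Docker volume, the Minio Python SDK can be used to confirm the object store is responding. The endpoint and credentials below are placeholders for a lab setup.

```python
# Minimal sketch: verify a Minio endpoint is serving requests.
# The endpoint and credentials are hypothetical lab values.
from minio import Minio

client = Minio(
    "minio.lab.local:9000",   # hypothetical Minio endpoint
    access_key="minio",       # hypothetical access key
    secret_key="minio123",    # hypothetical secret key
    secure=False,             # lab setup without TLS
)

# Create a bucket if it does not exist, then list buckets as a sanity check.
if not client.bucket_exists("vsan-demo"):
    client.make_bucket("vsan-demo")

for bucket in client.list_buckets():
    print(bucket.name, bucket.creation_date)
```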
After receiving a number of queries about vSphere Fault Tolerance on vSAN over the past couple of weeks, I decided to take a closer look at how Fault Tolerant VMs behave with different vSAN policies. I wanted to examine two different policies. The first is when the “failures to tolerate” (commonly referred to as FTT) is set to 0, and the other is when the “failures to tolerate” is set to 1. The question is whether we could deploy VMs without any vSAN protection and let vSphere Fault Tolerance protect them instead.
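To make the trade-off concrete, here is a back-of-the-envelope sketch (my own illustration, not from the post) of the raw vSAN capacity a Fault Tolerant VM might consume, assuming FT keeps a complete secondary copy of the VMDKs and that FTT=n means n+1 mirrored replicas of each object (no erasure coding, and ignoring witness/checksum overhead).

```python
# Rough capacity arithmetic for an FT VM on vSAN.
# Assumptions (mine, for illustration): FT maintains a full secondary copy of
# the disks, and FTT uses RAID-1 mirroring, i.e. FTT=n -> n+1 replicas.

def ft_vm_capacity(vmdk_gb: float, ftt: int) -> float:
    """Approximate raw capacity used by the FT primary + secondary copies."""
    replicas_per_copy = ftt + 1   # RAID-1 mirroring on vSAN
    ft_copies = 2                 # FT primary VM + FT secondary VM
    return vmdk_gb * replicas_per_copy * ft_copies

for ftt in (0, 1):
    print(f"100 GB VMDK, FTT={ftt}: ~{ft_vm_capacity(100, ftt):.0f} GB raw capacity")
```

Under those assumptions, FTT=0 relies on FT alone for the second copy (~200 GB for a 100 GB VMDK), while FTT=1 also mirrors each FT copy (~400 GB), which is exactly the trade-off the post explores.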
If you’ve been following my recent blog posts, you’ll have seen that I have been spending some time ramping up on NSX-T and Pivotal Container Service (PKS). My long-term goal was to see how these two products integrate together and to figure out the various moving parts. As I was very unfamiliar with both products, I took a piecemeal approach to both. First, I tried to get some familiarity with NSX-T. You can find my previous posts on NSX-T here: “Building a simple ESXi host overlay network with NSX-T”, “First steps with NSX-T Edge – DHCP Server”, Next…
If you’ve been following along on my NSX-T adventures, you’ll be aware that at this point we have our overlay network deployed, and our NSX-T edge has been set up with DHCP servers attached to my logical switch, which in turn provides IP addresses to my virtual machines. This is all well and good, but I’d also like these VMs to reach the outside world. NSX-T enables this through a feature called logical routers. In this post, I will talk you through how to configure a tier 0 logical router which connects to the outside world, a tier 1 logical router…
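For anyone who prefers the API route, here is a rough sketch (my own, not taken from the post) of creating a tier-1 logical router through the NSX-T Manager REST API using Python. The manager address, credentials and edge cluster ID are placeholders for a lab environment.

```python
# Sketch: create a tier-1 logical router via the NSX-T Manager API
# (POST /api/v1/logical-routers). All names/credentials are placeholders.
import requests

NSX_MANAGER = "https://nsxmgr.lab.local"      # hypothetical NSX-T Manager
AUTH = ("admin", "VMware1!VMware1!")          # hypothetical credentials

payload = {
    "display_name": "t1-router-demo",
    "router_type": "TIER1",
    "edge_cluster_id": "<edge-cluster-uuid>",  # placeholder
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/logical-routers",
    auth=AUTH,
    json=payload,
    verify=False,                              # lab only: self-signed certs
)
resp.raise_for_status()
print("Created logical router:", resp.json().get("id"))
```

The tier-1 router would then be linked to the tier-0 router and to a downlink port on the logical switch in further steps, which is what the post walks through in the UI.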
Hot on the heels of the vSAN 6.7 release, a new performance checklist for vSAN benchmarking has now been published on our StorageHub site. This is the result of a project that I started a few months back with my colleague, Paudie O’Riordan. It builds upon a huge amount of groundwork that was already done in this area by Andreas Scherr, one of our Senior Solutions Architects here in EMEA. The aim of this checklist is to get the best possible performance out of your vSAN deployment, typically during the Proof of Concept (PoC) stage. We’ve had many situations where…
This week I attended KubeCon and CloudNativeCon 2018 in Copenhagen. I had two primary goals during this visit: (a) find out what was happening with storage in the world of Kubernetes (K8s), and (b) look at how people were doing day 2 operations, monitoring, logging, etc., as well as the challenges one might encounter running K8s in production. Let’s start with what is happening in storage. The first storage-related session I went to was on Rook. This was a presentation by Jared Watts. According to Jared, the issues that Rook is trying to solve are to avoid vendor lock-in…
This post will walk you through a simplified PKS (Pivotal Container Service) deployment in my lab. I say simplified because all of the components will be deployed on a single flat network. PKS has a number of network dependencies. These include the BOSH agents deployed on the Kubernetes (K8s) VMs being able to reach the BOSH Director, as well as the vCenter server. Let’s not get too deep into the components just yet – these will be explained over the course of the post. So rather than trying to set up routing between multiple…
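Before deploying anything, it can be worth sanity-checking those network dependencies from the flat network. Here is a small sketch (my own illustration, not from the post) that simply tests TCP reachability to the BOSH Director and vCenter; the hostnames are placeholders, and the Director port shown is the default BOSH API port.

```python
# Sketch: check that the flat network can reach the BOSH Director and vCenter.
# Hostnames are hypothetical lab values; 25555 is the default BOSH Director port.
import socket

ENDPOINTS = {
    "BOSH Director": ("bosh.lab.local", 25555),
    "vCenter":       ("vcsa.lab.local", 443),
}

for name, (host, port) in ENDPOINTS.items():
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{name}: reachable on {host}:{port}")
    except OSError as err:
        print(f"{name}: NOT reachable on {host}:{port} ({err})")
```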