Minio S3 object store deployed as a set of VMs on vSAN

Some time back, I looked at what it would take to run a container-based Minio S3 object store on top of vSAN. This involved using our vSphere Docker Volume Service (aka Project Hatchway), and the details can be found here. However, I wanted to evaluate what it would take to scale out the Minio S3 object store on top of vSAN, paying particular attention to features like distribution and availability, and to examine the various data services that can be provided by both vSAN and Minio. I also wanted to take advantage of the new host-pinning feature in vSAN…
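By way of illustration, here is a minimal sketch of the container-based approach using the Docker SDK for Python. The volume name, size, credentials and port are placeholders (not taken from the post), and it assumes the vDVS plugin, which registers the "vsphere" volume driver, is already installed on the Docker host.

```python
# A minimal sketch, assuming the vSphere Docker Volume Service plugin is
# installed. Volume name, size and Minio credentials are hypothetical.
import docker

client = docker.from_env()

# Create a vSAN-backed volume through the vDVS ("vsphere") volume driver.
volume = client.volumes.create(
    name="minio-data",
    driver="vsphere",
    driver_opts={"size": "100gb"},  # hypothetical size
)

# Run a single Minio server with the vSAN-backed volume as its data dir.
container = client.containers.run(
    "minio/minio",
    command="server /data",
    volumes={"minio-data": {"bind": "/data", "mode": "rw"}},
    environment={
        "MINIO_ACCESS_KEY": "minio",      # placeholder credentials
        "MINIO_SECRET_KEY": "minio123",
    },
    ports={"9000/tcp": 9000},
    detach=True,
)
print(container.id)
```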

What’s new in vSAN 6.6?

vSAN 6.6 is finally here. This sixth iteration of vSAN is quite a significant release for many reasons, as you will read about shortly. In my opinion, this may be the vSAN release with the most new features yet. Let’s cut straight to the chase and highlight all the features of this next version of vSAN. There is a lot to tell you about, so now might be a good time to grab yourself a cup of coffee.

Debunking some behavior “myths” in 3-node vSAN clusters

I recently noticed a blog post describing some very strange behaviors in 2-node and 3-node vSAN clusters. I was especially concerned to read that when the authors introduced a failure and then fixed it, they did not see any auto-recovery. I have reached out to the authors of the post to check a few things, such as the version of vSAN, the type of failure, and so on. Unfortunately I haven’t had a response as yet, but I did feel compelled to set the record straight. In the following post, I am going to introduce a variety of operations and failures in my…

Check out the new VSAN 6.2 Hands-On-Lab

HOL-SDC-1608, our VSAN hands-on lab, has been updated for VSAN version 6.2. This lab covers a bunch of new VSAN 6.2 features, including erasure coding (RAID-5/6), checksum, sparse swap and dedupe/compression. You can also see the new health check views, performance metric views and capacity views. Also included is a workflow that will guide you through configuring VSAN stretched cluster and remote-office/branch-office (ROBO) implementations, and show how these features work with HA to restart VMs in the event of a failure. The whole lab is modularised, so you can simply look at the features that interest you. You can get access via…

DRS and VM/Host Affinity Groups in VSAN Stretched Cluster

In a previous post, I talked about how vSphere HA is used extensively in VSAN Stretched Cluster. The primary purpose of vSphere HA is to restart virtual machines in the event of a failure. However, to ensure that the restarted virtual machines continue to perform optimally and keep using a warmed cache, I mentioned that we need to use VM/Host affinity rules. In this post I want to discuss the role of DRS and VM/Host affinity rules in more detail, and how they are used in VSAN stretched cluster.
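As a concrete illustration, a VM/Host affinity rule can be created programmatically with pyVmomi. This is a sketch under assumed names: the vCenter address, credentials, cluster, VM, host and group names below are all hypothetical.

```python
# A minimal pyVmomi sketch of a VM/Host "should run on" affinity rule.
# All names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Connect to vCenter (hostname and credentials are placeholders).
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Return the first managed object of the given type matching `name`.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.DestroyView()
    return obj

cluster = find_by_name(vim.ClusterComputeResource, "vsan-stretched-cluster")
vm = find_by_name(vim.VirtualMachine, "app-vm-01")
host = find_by_name(vim.HostSystem, "esxi-siteA-01.example.com")

# One VM group and one host group for a site, tied together with a
# "should run on hosts in group" rule.
spec = vim.cluster.ConfigSpecEx(
    groupSpec=[
        vim.cluster.GroupSpec(operation="add",
            info=vim.cluster.VmGroup(name="siteA-vms", vm=[vm])),
        vim.cluster.GroupSpec(operation="add",
            info=vim.cluster.HostGroup(name="siteA-hosts", host=[host])),
    ],
    rulesSpec=[
        vim.cluster.RuleSpec(operation="add",
            info=vim.cluster.VmHostRuleInfo(
                name="siteA-vms-should-run-on-siteA-hosts",
                enabled=True,
                mandatory=False,  # "should", not "must"
                vmGroupName="siteA-vms",
                affineHostGroupName="siteA-hosts")),
    ])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```

The mandatory=False setting is what makes this a “should” rule rather than a “must” rule, and in a stretched cluster that distinction matters: vSphere HA can violate a “should” rule to restart virtual machines on the surviving site.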

Read locality in VSAN stretched cluster

Many regular readers will know that we do not do read locality in Virtual SAN. For VSAN, it has always been a trade-off between network latency and storage latency. Let me give you an example. When we deploy a virtual machine with multiple objects (e.g. a VMDK), and this VMDK is mirrored across two disks on two different hosts, we read in a round-robin fashion from both copies based on the block offset. Similarly, as the number of failures to tolerate is increased, resulting in additional mirror copies, we continue to read in a round-robin fashion from each copy, again based on…
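To make the round-robin idea concrete, here is a purely illustrative model (not vSAN source code) of selecting a mirror copy from the block offset; the 1 MiB region size is a made-up value.

```python
# Illustrative sketch: consecutive regions of a VMDK alternate between
# mirror copies, so reads spread round-robin across replicas.
REGION_SIZE = 1024 * 1024  # hypothetical region granularity, 1 MiB

def pick_mirror(block_offset: int, num_mirrors: int) -> int:
    """Select which mirror copy services a read at this byte offset."""
    return (block_offset // REGION_SIZE) % num_mirrors

# With FTT=1 there are two mirrors; reads alternate region by region:
for offset in range(0, 4 * REGION_SIZE, REGION_SIZE):
    print(offset, "-> mirror", pick_mirror(offset, num_mirrors=2))
```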

vSphere 6.0 HA and Component Protection with vMSC

I had a query recently about changes in vSphere 6.0, especially when it comes to vSphere HA and VM Component Protection (VMCP) with vMSC, vSphere Metro Storage Cluster. The question is very straightforward: do all the same advanced setting recommendations for PDL and APD apply to vMSC on vSphere 6.0 as they did for vSphere 5.5? Or do we have some new recommendations now around PDL and APD for vMSC with the introduction of VMCP in vSphere 6.0?
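For reference, here is a hedged pyVmomi sketch of enabling VMCP and setting cluster-default PDL/APD responses on a vSphere 6.0 cluster. The specific responses chosen below are illustrative choices, not the official vMSC recommendations, and `cluster` is assumed to be a vim.ClusterComputeResource looked up as in the earlier DRS sketch.

```python
# A sketch, assuming `cluster` was obtained as in the earlier example.
# The response values chosen here are illustrative, not recommendations.
from pyVmomi import vim

vmcp = vim.cluster.VmComponentProtectionSettings(
    vmStorageProtectionForPDL="restartAggressive",    # PDL: terminate and restart VMs
    vmStorageProtectionForAPD="restartConservative",  # APD: restart if capacity exists
    vmTerminateDelayForAPDSec=180,                    # delay after the APD timeout
    vmReactionOnAPDCleared="reset",                   # reset VM if APD clears first
    enableAPDTimeoutForHosts=True)

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        enabled=True,                       # vSphere HA on
        vmComponentProtecting="enabled",    # turn on VMCP
        defaultVmSettings=vim.cluster.DasVmSettings(
            vmComponentProtectionSettings=vmcp)))

cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```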