Expanding on VSAN 2-node, 3-node and 4-node configuration considerations

I spent the last 10 days at VMware HQ in Palo Alto, and had lots of really interesting conversations and meet-ups, as you might imagine. One of those conversations revolved around the minimum VSAN configurations. Let’s start with the basics.

  • 2-node: There are two physical hosts for data and a witness appliance hosted elsewhere. Data is placed on the physical hosts, and the witness appliance holds the witness components only, never any data.
  • 3-node: There are three physical hosts, and the data and witness components are distributed across all of them. This configuration can support a number of failures to tolerate (FTT) = 1 with RAID-1 configurations.
  • 4-node: There are four physical hosts, and the data and witness components are distributed across all of them. This configuration can support FTT = 1 with both RAID-1 and RAID-5 configurations. It also allows VSAN to self-heal in the event of a failure when RAID-1 is used (a quick command-line check of cluster membership and the default policy is sketched below).
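For reference, a quick way to sanity-check any of these layouts from an ESXi shell is to look at the VSAN cluster membership and the default policy. This is just a sketch assuming SSH access to one of the hosts; the output details vary a little between VSAN versions.

```
# Show VSAN cluster state, including the sub-cluster member count
# (2, 3 or 4 hosts in the configurations discussed above)
esxcli vsan cluster get

# Show the default per-object policies; hostFailuresToTolerate here
# corresponds to the FTT value referenced above
esxcli vsan policy getdefault
```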

Continue reading

FlashSoft I/O Filter VAIO Setup Steps

Last week I had the opportunity to drop down to San Jose and catch up with our friends on the FlashSoft team at SanDisk. In case you were not aware, this team has been developing a cache acceleration I/O filter as part of the VAIO program (VAIO is short for vSphere APIs for I/O Filtering). SanDisk were also one of the design partners chosen by VMware for VAIO. This program allows our partners to plug directly into the VM I/O path and add third-party data services, such as replication, encryption, quality of service and so on. An interesting observation made by the FlashSoft team is that implementing their acceleration data service via VAIO gives much greater performance than their previous product version, which plugged into the Pluggable Storage Architecture (PSA) stack.

Manish, Rich, Serge and Tom gave us another update on the 4.0 version of the FlashSoft product. I had seen this before, as the guys had tech-previewed it at VMworld 2015 last year. However, with the release of FlashSoft 4.0, they now have the first certified VAIO I/O Filter on the market. SanDisk are our first certified partner for VAIO. Rich Petersen has a good write-up on the capabilities here on the SanDisk blog.

The guys were also kind enough to give me access to the components and some evaluation licenses, so I could test it out in my own labs. The documentation is pretty good, so I won’t go into too much detail. However, these are the steps to get going:
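I won’t reproduce the full procedure here, but for context: a VAIO filter ultimately lands on each ESXi host as a VIB (in practice the vendor’s installer normally pushes this out via vCenter for you). A hedged way to confirm that from an ESXi shell, using an illustrative match string rather than the exact package name, would be:

```
# List installed VIBs and look for the I/O filter package
# ("flashsoft" is just an illustrative match string; check the
#  FlashSoft documentation for the actual VIB name)
esxcli software vib list | grep -i flashsoft
```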

Continue reading

Check out the new VSAN 6.2 Hands-On-Lab

HOL-SDC-1608, our VSAN hands-on-lab, has been updated for VSAN version 6.2.

This lab contains a bunch of new VSAN 6.2 features including erasure coding (RAID-5/6), checksum, sparse swap and dedupe/compression. You can also see the new health check views, performance metric views and capacity views.

Also included is a workflow that will guide you through configuring VSAN stretched cluster and remote-office/branch-office (ROBO) implementations, and how these features work with HA to restart VMs in the event of a failure.

The whole lab is modularised, so you can simply look at the features that interest you.

You can get access via the hands-on-lab portal – http://labs.hol.vmware.com/HOL/catalogs/catalog/123

Recovering from a full VSAN datastore scenario

We had an interesting event happen on one of our lab servers this weekend. One of the hosts in our four-node cluster hit an issue, which meant that the storage on that host was no longer available to the VSAN datastore. Since VSAN auto-heals, it attempted to re-protect as many VMs as possible. However, since we had chosen to ignore one of the health check warnings related to limits, we ended up with a full VSAN datastore.

Continue reading

How to SSH between ESXi 6.0U2 hosts without providing a password

Before I get into this post, I do want to highlight that you probably will not do this in any production-type environment. The reason why I implemented this, and how this post came about, is that I was helping out with our new edition of the VSAN 6.2 Hands-On-Lab (which should be available imminently, by the way). Part of the lab involved demonstrating checksum functionality. Since VSAN has a distributed architecture, there was a requirement to run commands on different hosts. Rather than having lab participants input the password each and every time a command is run on a remote host, which becomes tedious very quickly, we decided to implement a public/private key pair, creating a trust between the ESXi 6.0U2 hosts and avoiding the need to input a password to run commands remotely. This proves a little problematic on ESXi hosts, as not every file on ESXi is persisted across reboots. The following are the steps we implemented to allow us to do this in the lab.
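The full steps are in the post, but the core of the idea looks something like this. Host names are placeholders, and the exact key and authorized_keys locations can vary between ESXi builds, so treat it as a sketch rather than the lab’s exact procedure:

```
# On the host you will run remote commands from, generate a key pair.
# Note: /tmp is not persisted across reboots, which is exactly the
# persistence wrinkle mentioned above.
/usr/lib/vmware/openssh/bin/ssh-keygen -t rsa -f /tmp/id_rsa -N ""

# Append the public key to the remote host's authorized_keys file
# (ESXi keeps root's authorized keys in /etc/ssh/keys-root/authorized_keys)
cat /tmp/id_rsa.pub | ssh root@esxi-02 'cat >> /etc/ssh/keys-root/authorized_keys'

# Commands can now be run on the remote host without a password prompt
ssh -i /tmp/id_rsa root@esxi-02 hostname
```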

Continue reading

A primer on App Volumes and AppStacks on VSAN

Last week I wrote a post on Horizon View 7 on VSAN. That was all about showing the policies that were associated with the different desktops that can be deployed. I did mention that while one could use vmFork/Instant Clones for desktops, they did not include any sort of persistence. I did add a caveat to say that you could provide persistent storage to these desktops using App Volumes. In this post, I wanted to give some details on App Volumes, and the different moving parts you will need if you want to deploy View desktops with App Volumes. However, I am only going to cover the considerations around deploying AppStacks, where an application is presented to the desktop via a read-only VMDK. I do not get into the specifics of writable volumes in this post, although these are also available with App Volumes. Hopefully this will be a good enough primer to get you started with App Volumes. I will say at the outset that there is nothing unique about using VSAN in this case, other than the fact that policies can be used for the VMDKs. As you will see shortly, both the management cluster and the production cluster run VSAN, and I could use App Volumes and AppStacks with no issues.

Continue reading

Getting started with Photon OS and vSphere Integrated Containers

There has been a lot of news recently about the availability of vSphere Integrated Containers (VIC) v0.1 on GitHub. VMware have been doing a lot of work around containers, container management and the whole area of cloud native applications over the last while. While many of these projects cannot be discussed publicly, there are two projects that I am going to look at here:

  • Photon OS – a minimal Linux container host designed to boot extremely quickly on VMware platforms.
  • vSphere Integrated Containers – a way to deploy containers on vSphere. This allows developers to create applications using containers, while the vSphere administrator manages the resources required for those containers.

As I said, this is by no means the limit of the work that is going on. Possibly the best write-up I have seen discussing the various work in progress is this one on the Next Platform site.

I will admit that I’m not that well versed in containers or docker, but I will say that I found Nigel Poulton’s Docker Deep Dive on PluralSight very informative. If you need a primer on containers, I would highly recommend watching this.

So what am I going to do in this post? I will walk through the deployment of Photon OS, and then deploy VIC afterwards. You can then see for yourself how containers can be deployed on vSphere, and perhaps managed by a vSphere administrator, while the developer just worries about creating the app and doesn’t have to worry about the underlying infrastructure.
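As a taster, a freshly deployed Photon OS VM already ships with a Docker daemon, so the quickest smoke test looks something like the following (assuming a default Photon OS install, a root login and network access to pull the image):

```
# Enable and start the Docker daemon that ships with Photon OS
systemctl enable docker
systemctl start docker

# Pull and run a throwaway container to confirm Docker is working
docker run --rm hello-world
```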

Continue reading