I spent the last 10 days at VMware HQ in Palo Alto, and had lots of really interesting conversations and meet-ups, as you might imagine. One of those conversations revolved around minimum VSAN configurations. Let’s start with the basics.
2-node: There are two physical hosts for data and a witness appliance hosted elsewhere. Data is placed on the physical hosts, and the witness appliance holds the witness components only, never any data.
3-node: There are three physical hosts, and the data and witness components are distributed across all hosts. This configuration can support a number of failures to tolerate (FTT) = 1 with RAID-1 configurations.
4-node: There are four physical hosts, and the data and witness components are distributed across all hosts. This configuration can support a number of failures to tolerate (FTT) = 1 with RAID-1 and RAID-5 configurations. This configuration also allows VSAN to self-heal in the event of a failure, when RAID-1 is used.
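The host counts above follow directly from how VSAN places components. As a rough sketch (the helper function below is purely illustrative, not a VSAN tool): RAID-1 with FTT=n needs n+1 data replicas plus n witness components, each on its own host, so 2n+1 hosts in total.

```shell
# Illustrative only: minimum hosts for a RAID-1 VSAN config at a given FTT.
# n+1 replicas + n witnesses, each on a separate host => 2n+1 hosts.
min_hosts_raid1() { echo $(( 2 * $1 + 1 )); }

min_hosts_raid1 1   # FTT=1 -> 3 hosts (the 3-node minimum above)
min_hosts_raid1 2   # FTT=2 -> 5 hosts
```

RAID-5 is different: it always stripes data plus parity across 4 hosts at FTT=1, which is why the 4-node configuration is the minimum for it.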
Last week I had the opportunity to drop down to San Jose and catch up with our friends on the FlashSoft team at SanDisk. In case you were not aware, this team has been developing a cache acceleration I/O filter as part of the VAIO program (VAIO is short for vSphere APIs for I/O Filters). SanDisk were also one of the design partners chosen by VMware for VAIO. This program allows our partners to plug directly into the VM I/O path and add third-party data services, such as replication, encryption, quality of service and so on. An interesting observation made by the FlashSoft team is that implementing their acceleration data service via VAIO gives much greater performance than their previous product version, which plugged into the Pluggable Storage Architecture (PSA) stack.
The guys were also kind enough to give me access to the components and some evaluation licenses, so I could test it out in my own lab. The documentation is pretty good, so I won’t go into too much detail. However, these are the steps to get going:
Before I get into this post, I do want to highlight that you probably will not do this in any production-type environment. The reason why I implemented this, and how this post came about, is because I was helping out with the new edition of the VSAN 6.2 Hands-On-Lab (which should be available imminently, by the way). Part of the lab involved demonstrating checksum functionality. Since VSAN has a distributed architecture, there was a requirement to run commands on different hosts. Rather than having lab participants input the password each and every time a command is run on a remote host, which becomes tedious very quickly, we decided to implement a public/private key pair, creating a trust between the ESXi 6.0U2 hosts and avoiding the need to input a password to run commands remotely. This proves a little problematic on ESXi hosts, as not every file on ESXi is persisted across reboots. The following are the steps we implemented to allow us to do this in the lab.
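To give a flavour of what is involved, here is a minimal sketch of the key-pair setup. It assumes a Linux or Mac admin workstation and an ESXi host reachable as esx01 (a hypothetical hostname); note that on ESXi, root’s authorized keys live in /etc/ssh/keys-root/authorized_keys rather than the usual ~/.ssh location.

```shell
# 1. Generate a key pair on the workstation (no passphrase, lab use only):
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/lab_key

# 2. Append the public key to the ESXi host's authorized_keys file.
#    On ESXi this file is /etc/ssh/keys-root/authorized_keys:
cat ~/.ssh/lab_key.pub | ssh root@esx01 \
  'cat >> /etc/ssh/keys-root/authorized_keys'

# 3. Run a remote command without a password prompt:
ssh -i ~/.ssh/lab_key root@esx01 'esxcli vsan cluster get'
```

Step 2 will still prompt for the password one last time; after that, the trust is in place for subsequent commands.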
There has been a lot of news recently about the availability of vSphere Integrated Containers (VIC) v0.1 on GitHub. VMware has been doing a lot of work around containers, container management and the whole area of cloud native applications over the last while. While many of these projects cannot be discussed publicly, there are two projects that I am going to look at here:
Photon OS – a minimal Linux container host designed to boot extremely quickly on VMware platforms.
vSphere Integrated Containers – a way to deploy containers on vSphere. This allows developers to create applications using containers, but have the vSphere administrator manage the required resources needed for these containers.
As I said, this is by no means the limit of the work that is going on. Possibly the best write-up I have seen discussing the various work in progress is this one on The Next Platform site.
I will admit that I’m not that well versed in containers or docker, but I will say that I found Nigel Poulton’s Docker Deep Dive course on Pluralsight very informative. If you need a primer on containers, I would highly recommend watching it.
So what am I going to do in this post? I will walk through the deployment of Photon OS, and then deploy VIC afterwards. You can then see for yourself how containers can be deployed on vSphere, and perhaps managed by a vSphere administrator, while the developer just worries about creating the app and doesn’t have to worry about the underlying infrastructure.
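As a quick preview of where we will end up, here is a minimal sketch of running a container once a Photon OS VM is deployed. Photon OS ships with the Docker daemon included; the commands below assume you have SSH’d into the VM (the nginx image is just a convenient example).

```shell
# Docker is bundled with Photon OS, but the daemon is not started by default:
systemctl start docker
systemctl enable docker        # start it on every boot

# Pull and run a container, mapping port 80 through to the VM:
docker run -d -p 80:80 nginx

# Verify the container is up:
docker ps
```

This is the developer’s side of the story; the vSphere administrator’s side (resource management via VIC) is what the rest of the post covers.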
It has been some time since I last looked at Horizon View on Virtual SAN. The last time was when we first released VSAN, back in the 5.5 days. This was with Horizon View 5.3.1, which was the first release that inter-operated with Virtual SAN. At the time, there was some funkiness with policies. View could only use the default policy, and the default policy used to show up as “none” in the UI. The other issue was that you could not change the default policy via the UI, only through CLI commands. Thankfully, things have come a long way since then. In this post, I will look at how Horizon View 7 inter-operates with Virtual SAN 6.2, concentrating mostly on policies. However, Horizon View 7 also has new vmFork/Instant Clone technology and AppVolumes, and I hope to be able to do some posts on those features running on top of VSAN going forward.
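For those curious what that old CLI workaround looked like, the default policy could be inspected and changed per object class with the esxcli vsan policy namespace on each host. A sketch (the policy expression shown is just an example setting FTT=1 for VM disk objects):

```shell
# Show the current default VSAN policy for each object class:
esxcli vsan policy getdefault

# Example: set the default policy for vdisk objects to tolerate one failure.
# Policy expressions use the SBPM-style syntax shown:
esxcli vsan policy setdefault -c vdisk \
  -p "((\"hostFailuresToTolerate\" i1))"
```

With View 7 and VSAN 6.2 this is no longer necessary, as we shall see, but it explains why the early experience felt rough.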
I’ve already written a few articles around this, notably on stretched cluster upgrades and on-disk format issues. In this post, I just wanted to run through the three distinct upgrade steps in a little more detail, and show you some useful commands that you can use to monitor the progress. In a nutshell, the steps are:
Upgrade vCenter Server to 6.0U2 (VSAN 6.2)
Upgrade ESXi hosts to ESXi 6.0U2 (VSAN 6.2)
Perform rolling upgrade of on-disk format from V2 to V3 across all hosts
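Each of the steps above can be verified from the ESXi command line. A few useful commands (run on each host; the grep patterns assume the field names reported in the 6.0U2 output):

```shell
# Step 2: confirm the host is now running the 6.0U2 build:
vmware -v

# Step 3: check the on-disk format version reported for each VSAN disk;
# V2 disks report version 2, upgraded disks report version 3:
esxcli vsan storage list | grep -i "format version"

# General health: confirm the host is still a healthy cluster member:
esxcli vsan cluster get
```

I will use these throughout the post to show where each step stands.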
A number of customers have reported experiencing difficulty when attempting to upgrade the on-disk format on VSAN 6.2. The upgrade to vSphere 6.0U2 goes absolutely fine; it is only when they try to upgrade the on-disk format, to use new features such as Software Checksum, and Deduplication and Compression, that they encounter this error. Here is a sample screenshot of the sort of error that is thrown by VSAN:
One thing I do wish to call out – administrators must use the VSAN UI to upgrade the on-disk format. Do not simply evacuate a disk group, remove it and recreate it. This will cause a mismatch between the previous disk group version (V2) and the new disk group version that you just created (V3). Use the UI for the on-disk format upgrade.
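For completeness, the supported upgrade can also be driven from RVC on the vCenter Server, which is handy for watching progress. A sketch, assuming an inventory path of /localhost/DC/computers/VSAN-Cluster (adjust for your own datacenter and cluster names):

```shell
# Log in to the Ruby vSphere Console on the vCenter Server:
rvc administrator@vsphere.local@localhost

# Kick off the rolling on-disk format upgrade for the whole cluster:
vsan.ondisk_upgrade /localhost/DC/computers/VSAN-Cluster

# Afterwards, verify every disk reports version 3:
vsan.disks_stats /localhost/DC/computers/VSAN-Cluster
```

Either way – Web Client or RVC – the point stands: let VSAN orchestrate the upgrade rather than rebuilding disk groups by hand.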