What’s new in vSAN 6.7U1

Yesterday, we saw the announcement that VMware has released vSphere 6.7U1. This includes new releases of vCenter Server 6.7U1, ESXi 6.7U1 and, of course, vSAN 6.7U1. All of the hyperlinks here will take you to the release notes of that particular product. In this post, I just want to briefly run through some of the major enhancements included in vSAN 6.7U1.

TRIM/UNMAP Support

Top of the list for me is the introduction of automated UNMAP support. The key point here is that this is for in-guest space reclamation. vSAN has never had an issue with space reclamation at the volume/datastore level. In many respects, it was like NFS: just as NFS could track all of its files, vSAN could track all of its objects. Should an object be removed, vSAN could immediately reclaim/re-use that space. So this focus is purely on in-guest reclaim. If a VMDK has grown over a period of time, and housekeeping is then done in the guest to clean up unused files, UNMAP should now kick in and shrink the VMDK, allowing the space freed up on the vSAN datastore to be reused by vSAN for other purposes. I’m pretty sure that anyone running Virtual Desktops (VDI) on vSAN will welcome this enhancement. Our tech marketing team has created a pretty decent write-up on how to enable and monitor TRIM/UNMAP operations, so have a read of this before beginning any space reclamation operations.
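For flavor, here is roughly what that looks like in practice. A minimal sketch, assuming a Linux guest; the RVC command shown is the cluster-level enablement method documented by VMware at the time, so verify the exact steps against the write-up above before using them:

```bash
# Enable TRIM/UNMAP at the cluster level. In 6.7U1 this is done via RVC
# (command name per the VMware docs of the day; verify on your build):
#   vsan.unmap_support /localhost/<datacenter>/computers/<cluster> -e

# Inside a Linux guest, freed space can then be reclaimed on demand:
sudo fstrim -av    # trim all mounted filesystems that support discard
# ...or continuously, by adding the 'discard' mount option in /etc/fstab.
```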

Further VUM Integration

Many readers will be aware that we have been making steady progress on making vSphere Update Manager (VUM) more and more vSAN aware. Today, when you wish to upgrade a vSAN environment, VUM will take care of this for you by doing a rolling upgrade of the cluster: placing each host into maintenance mode, updating it, rebooting it and then taking it back out of maintenance mode. 6.7U1 builds on this by including firmware-level updates of the storage controllers in the VUM workflows. Initially this will be limited to a subset of the storage controllers on the compatibility list, but the objective is to grow this as much as possible so that all updates to your vSAN hyper-converged infrastructure can be orchestrated via VUM.

Maintenance Mode Enhancements

This is another area where more and more safeguards have been added over the last number of releases. In vSAN 6.6, we did a lot of work on the decommissioning and maintenance mode workflows, whereby we highlighted any capacity constraints or object unavailability depending on which option was chosen for data evacuation. In 6.7U1 we continue to build on these safeguards, and vSAN will now attempt to simulate the result of a host being placed into maintenance mode. If the simulation shows that there is not enough space left on the cluster to re-protect/rebuild all of the objects (if this is the option that has been requested by the administrator), then the maintenance mode operation will fail immediately, and the administrator is given the reason why. This is different to previous versions, whereby vSAN made a ‘best-effort’ attempt to re-protect as many objects as it could.
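As an aside, the same evacuation choices surface if you enter maintenance mode from the ESXi command line rather than the vSphere client. A minimal sketch (flags as per "esxcli system maintenanceMode set --help"; double-check them on your own build):

```bash
# Enter maintenance mode with full data evacuation, i.e. the option whose
# capacity requirements the new simulation checks up front:
esxcli system maintenanceMode set --enable true --vsanmode evacuateAllData
# Other --vsanmode values: ensureObjectAccessibility (default) and noAction.
esxcli system maintenanceMode set --enable false   # take the host back out
```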

There are a few additional enhancements in this area: the maintenance mode workflow now checks whether any rebuild/resync activity is taking place on the cluster, and whether any other hosts in the vSAN cluster are already in, or are concurrently being placed into, maintenance mode. Finally, should you need a longer wait time for this maintenance activity before objects start to rebuild (longer than the default 60-minute CLOM repair delay), this can now be adjusted globally in the vSphere client rather than on an individual ESXi host basis. This can be done via the Advanced settings dialog box under Configure > vSAN > Services > Advanced Options.
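For reference, the advanced option in question is /VSAN/ClomRepairDelay, which previously had to be changed on every host individually. An illustrative sketch of the old per-host approach, now superseded by the cluster-wide setting in the vSphere client:

```bash
# Old per-host approach, for comparison (value is in minutes):
esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 90
esxcli system settings advanced list -o /VSAN/ClomRepairDelay   # verify
# Note: older guidance also suggested restarting clomd after changing this;
# check the relevant KB for your release.
```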

New Proactive Network Performance Test

The proactive tests have changed significantly over the last few releases. Gone is the Multicast performance test (since we moved to unicast for network communication in 6.6), and gone too are the Storage performance tests (in favor of HCIBench). In 6.7U1, a new test has been added: the network performance test. It is essentially a reintroduction of the original performance test, but now designed for unicast traffic rather than multicast traffic.

The “Network Performance Test” simply verifies that the network infrastructure can handle a particular throughput of network traffic, and highlights cases where the network cannot carry the load that vSAN requires. This is especially important where there is a complex network configuration that may involve a number of hops or routes, such as when vSAN is deployed over L3.

The “Network Performance Test” offers the option of including network diagnostics. These diagnostics can be useful in determining whether there is sufficient bandwidth between all of the ESXi hosts to support vSAN. The test checks to make sure there is at least 850 Mbps between the hosts.
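If you want to cross-check the result outside of the proactive test, a couple of well-known commands can help. A rough sketch, where vmk1 as the vSAN vmkernel port and 192.168.100.12 as a peer host's vSAN IP are both assumptions, and the iperf3 path is from memory of 6.7-era builds, so verify it on your hosts:

```bash
# Check MTU/fragmentation across the vSAN network (-s 8972 assumes jumbo
# frames; use -s 1472 for a standard 1500-byte MTU):
vmkping -I vmk1 -d -s 8972 192.168.100.12
# ESXi ships an iperf3 binary alongside vSAN that can be used for raw
# throughput testing between hosts (start it with -s on the peer first):
/usr/lib/vmware/vsan/bin/iperf3 -c 192.168.100.12
```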

Historical Data Reporting

The historical data reporting is an enhancement that I am really happy to see included. It might seem like quite a small feature, but it has wide-ranging benefits. This is the ability to look back at historical capacity usage, which could be very useful in forecasting future consumption and in deciding whether the administrator should consider adding more capacity to the vSAN cluster. Not only that, but you can select a policy to run against the remaining capacity to see how much actual capacity is available once the overhead of VM protection is taken away, be it RAID-1, RAID-5 or RAID-6.
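To give a feel for the arithmetic behind that what-if view, here is a simplified back-of-the-envelope calculation. It ignores slack space, deduplication and other real-world factors that the vSphere client does account for:

```bash
RAW_GB=10240                                           # say, 10 TB raw
echo "RAID-1 (FTT=1): $((RAW_GB / 2)) GB usable"       # two full data copies
echo "RAID-5 (3+1):   $((RAW_GB * 3 / 4)) GB usable"   # 25% parity overhead
echo "RAID-6 (4+2):   $((RAW_GB * 2 / 3)) GB usable"   # ~33% parity overhead
```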

Quickstart Workflow

This new quickstart workflow aims to get your vSAN cluster deployed very quickly (as should be obvious from the name, I guess). It does this by providing guidance every step of the way: it asks the administrator to select which hosts to add to the cluster, checks their health status and configuration, and provides an overview of the steps that still need to be taken, such as configuring the vSAN network. The end result is that administrators get what amounts to a hand-held approach to deploying a vSAN cluster.

Not only that, but if there is a need to scale out the cluster at some point in the future, quickstart allows a user to add additional hosts to the cluster, validates each new host, offers guidance on what additional steps might be needed, and then ensures that only a validated, healthy host is added. This is a really cool feature for greenfield vSAN deployments.

Summary

In closing, I think it is clear that, once again, a huge number of additional features have been added to vSAN in this 6.7U1 release. I haven’t listed everything; there are also new enhancements to health checks, support diagnostics, vRealize Operations integration and more. If you want the full set of features, please check out the release notes linked in the opening part of this post. I’d also recommend popping over to this blog article from my colleague Jase, who covers all of the new features and enhancements in vSAN 6.7U1.

6 Replies to “What’s new in vSAN 6.7U1”

  1. Hi, thanks Cormac, great write-up. One question we have is whether TRIM/UNMAP is supported on LUNs presented to guests as vSAN iSCSI targets. As you know, vSAN iSCSI target VMDKs can’t be expanded on the fly, so we’d kind of want to over-provision the LUNs to mitigate this. We don’t then want to find that the over-provisioned storage is irretrievably eroded, either by users erroneously consuming space or by filesystem block creep. Thanks again!

  2. On a four-node 6.7U1 cluster with dual 25GbE NICs, the Proactive Network Performance Test shows only around 9600 Mb/s Received Bandwidth. Why? Is that some internal limitation? How exactly does this test work? The network was verified to provide the full 25 Gb/s throughput between all hosts using iperf.

    1. I suspect that it may be limited to 10GbE. Worth asking support, and having them raise this with engineering to resolve it in future releases if that is indeed the case.
