vSAN 7.0U1 – What’s new?

VMware has just announced the next release of their Hyper-converged Infrastructure product, vSAN 7.0 Update 1 (U1). In this post, I will cover some of the main big-ticket items included in this release. You’ll notice a good number of new features and additional functionality, some of which have been requested for quite some time, so it is fantastic to finally see them in the product.

vSAN File Services now supports the SMB protocol

In vSAN 7.0, we announced support for vSAN File Services. In that release, we supported the creation of NFS volumes that could be presented to NFS clients. In vSAN 7.0U1, we have extended the supported protocols to include the SMB (Server Message Block) protocol alongside NFS. vSAN can now present file shares that support NFS v3 and v4.1, as well as SMB v2.1 and v3.
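
As a point of reference, here is a minimal sketch of how a Linux client might mount the NFS flavours of these shares; the file service endpoint and share names are placeholders of my own, not values from the product.

```python
import subprocess

FS_ENDPOINT = "vsan-fs01.example.com"  # hypothetical vSAN File Service access point

# Mount a share over NFS v3.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=3",
     f"{FS_ENDPOINT}:/share01", "/mnt/share01"],
    check=True,
)

# Mount another share over NFS v4.1.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4.1",
     f"{FS_ENDPOINT}:/share02", "/mnt/share02"],
    check=True,
)
```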

vSAN File Services now supports Kerberos and Active Directory

One of the obvious use-cases for vSAN file shares is to provide users with home folders/directories. To ensure that user files are protected, and that users can only see their own shares, integration with Kerberos authentication for NFS and Active Directory for SMB is needed. This integration is now available in vSAN File Services 7.0U1, so vSphere administrators now have full control over vSAN file share permissions.
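
To make that a little more concrete, here is a minimal sketch of what the secured client-side mounts might look like once Kerberos and Active Directory are in play; the endpoint, share names, user and domain are placeholders of my own, and the exact options will depend on your environment.

```python
import subprocess

FS_ENDPOINT = "vsan-fs01.example.com"  # hypothetical vSAN File Service access point

# NFS v4.1 mount secured with Kerberos (the client needs a valid ticket,
# e.g. obtained via kinit, in the same realm as the file service).
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4.1,sec=krb5",
     f"{FS_ENDPOINT}:/homes", "/mnt/homes"],
    check=True,
)

# SMB v3 mount authenticating with an Active Directory account.
subprocess.run(
    ["mount", "-t", "cifs", "-o", "vers=3.0,username=jdoe,domain=EXAMPLE",
     f"//{FS_ENDPOINT}/homes", "/mnt/smb-homes"],
    check=True,
)
```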

vSAN File Services Scale Increase

The number of ESX Agent Manager VMs that support the vSAN File Services protocol stack has been increased from a maximum of 8 per vSAN cluster to a maximum of 32 per vSAN cluster.

Introducing Disaggregated HCI / HCI Mesh

One of the most common pieces of feedback we have heard over the years with vSAN is that when there is available capacity on one vSAN cluster, it cannot be easily used by another vSAN cluster. Sure, there are ways around that with the iSCSI Target service and vSAN File Services, but customers wanted a simpler approach. In vSAN 7.0U1, we are introducing the first version of a disaggregated vSAN, also known as HCI Mesh. This allows a local vSAN cluster to mount the vSAN datastore from another (remote) vSAN cluster, and vice-versa. This means that if there is “stranded space” on some vSAN datastores, it can now be consumed by a remote vSAN cluster and used for provisioning virtual machine objects. There are some requirements around networking which will be made clear in the official docs. There are also some scaling limits in this first version. For example, you can only federate a total of 16 vSAN clusters in an HCI Mesh. Within that mesh, a single cluster can mount up to 5 remote vSAN datastores, and similarly, a vSAN datastore can only be mounted by 5 remote vSAN clusters. I’m sure this feature will be a big hit with customers who already have many vSAN clusters.
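
To make those scaling limits concrete, here is a small illustrative sketch (my own, not a VMware tool) that checks a planned mesh topology against them; the cluster names are placeholders.

```python
from collections import defaultdict

# HCI Mesh limits in vSAN 7.0U1, as described above.
MAX_CLUSTERS_IN_MESH = 16               # total vSAN clusters federated in one mesh
MAX_REMOTE_DATASTORES_PER_CLUSTER = 5   # remote datastores a single cluster may mount
MAX_CLIENT_CLUSTERS_PER_DATASTORE = 5   # remote clusters that may mount one datastore

def validate_mesh(mounts: dict[str, set[str]]) -> list[str]:
    """mounts maps a client cluster name to the set of remote vSAN datastores
    (identified here by their owning cluster) that it mounts."""
    problems = []

    clusters = set(mounts) | {ds for remotes in mounts.values() for ds in remotes}
    if len(clusters) > MAX_CLUSTERS_IN_MESH:
        problems.append(f"{len(clusters)} clusters in mesh, limit is {MAX_CLUSTERS_IN_MESH}")

    clients_per_datastore = defaultdict(set)
    for client, remotes in mounts.items():
        if len(remotes) > MAX_REMOTE_DATASTORES_PER_CLUSTER:
            problems.append(f"{client} mounts {len(remotes)} remote datastores, limit is 5")
        for ds in remotes:
            clients_per_datastore[ds].add(client)

    for ds, clients in clients_per_datastore.items():
        if len(clients) > MAX_CLIENT_CLUSTERS_PER_DATASTORE:
            problems.append(f"datastore of {ds} mounted by {len(clients)} clusters, limit is 5")

    return problems

# Example: cluster-a consumes stranded space on cluster-b and cluster-c.
print(validate_mesh({"cluster-a": {"cluster-b", "cluster-c"}}))  # -> []
```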

A new Compression Only Data Service

This is another feature that our customers have been requesting for some time. Prior to this release, the deduplication and compression space efficiency features were combined; you could not enable one without the other. So even workloads that did not benefit from deduplication needed to have it enabled on the vSAN datastore if they wanted the compression feature. This also had availability implications, since the deduplication hash table was striped across all of the disks in the disk group. Should a disk fail while deduplication was enabled, the failure impacted the whole of the disk group. Having an option to enable compression only in vSAN 7.0U1 is definitely a nice update.
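
Purely to illustrate the relationship between the two services, here is a small sketch (my own model, not a VMware API) of the valid combinations: deduplication still requires compression, but compression can now be enabled on its own.

```python
from dataclasses import dataclass

@dataclass
class SpaceEfficiencyConfig:
    """Illustrative model of the vSAN cluster space-efficiency options."""
    compression: bool = False
    deduplication: bool = False

    def validate(self) -> None:
        # Deduplication is still only available together with compression;
        # compression-only is the new option added in vSAN 7.0U1.
        if self.deduplication and not self.compression:
            raise ValueError("Deduplication requires compression to be enabled")

# Valid in 7.0U1: compression without deduplication.
SpaceEfficiencyConfig(compression=True, deduplication=False).validate()

# Still invalid: deduplication without compression.
try:
    SpaceEfficiencyConfig(compression=False, deduplication=True).validate()
except ValueError as err:
    print(err)
```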

vSAN “Shared” Witness Appliance

Customers who have deployed 2-node vSAN clusters will be very much aware of the requirement to use a witness appliance. For each 2-node vSAN cluster deployed, an additional witness appliance also needed to be deployed. In vSAN 7.0U1, we now have the ability for these 2-node vSAN clusters to share the same witness appliance. A single vSAN 7.0U1 witness appliance can now support up to 64 2-node vSAN clusters. Note that the shared witness appliance is only available for 2-node vSAN clusters at this time. It cannot be used for vSAN Stretched Clusters.
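
As a simple illustration of what the new limit means, the sketch below (again my own example, not a VMware utility) works out how many shared witness appliances a given number of 2-node clusters would need.

```python
import math

MAX_CLUSTERS_PER_SHARED_WITNESS = 64  # 2-node clusters per shared witness in vSAN 7.0U1

def witnesses_required(num_two_node_clusters: int) -> int:
    """Minimum number of shared witness appliances needed."""
    return math.ceil(num_two_node_clusters / MAX_CLUSTERS_PER_SHARED_WITNESS)

def assign_clusters(clusters: list[str]) -> dict[str, list[str]]:
    """Chunk the 2-node clusters into groups of up to 64 per witness appliance."""
    size = MAX_CLUSTERS_PER_SHARED_WITNESS
    return {
        f"witness-{i + 1}": clusters[i * size:(i + 1) * size]
        for i in range(witnesses_required(len(clusters)))
    }

# 150 remote-office 2-node clusters now need only 3 shared witness appliances
# instead of 150 dedicated ones.
sites = [f"robo-{n:03d}" for n in range(1, 151)]
print(witnesses_required(len(sites)))                          # 3
print({w: len(c) for w, c in assign_clusters(sites).items()})
# {'witness-1': 64, 'witness-2': 64, 'witness-3': 22}
```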

vSAN Data Persistence platform

I believe most readers at this point will be well aware of the shift in application development towards a more cloud native approach, typically involving containers and most likely orchestrated by Kubernetes. With this in mind, VMware are continuously enhancing vSAN to be a platform for both container workloads and virtual machine workloads. This latest development is a step towards enabling “cloud native” applications, which are normally “shared-nothing” with their own replication and built-in availability features, to be deployed successfully on vSAN. At the same time, we want to ensure that these applications can run as optimally as possible from a storage perspective. Lastly, these applications will have the built-in smarts to understand what action to take when an event occurs on the underlying vSphere infrastructure, e.g. maintenance mode, upgrade, patching, etc.

The vSAN Data Persistence platform (DPp) will deploy partner applications in the Supervisor Cluster of vSphere with Kubernetes. We are currently working with a handful of design partners in the initial release. These partner applications are “shared-nothing” and already have built-in replication/protection features, which means that vSAN does not need to provide any protection at the underlying layer – the storage objects for the application are provisioned with no protection (FTT=0 to use vSAN terminology).
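
To make the FTT=0 point a little more concrete, here is a minimal sketch of how such storage might be requested from the Supervisor Cluster through an ordinary PersistentVolumeClaim; the namespace and the StorageClass name ("vsan-ftt-0") are hypothetical placeholders that would map to a storage policy with no vSAN-level protection.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster

# Hypothetical StorageClass mapped to a vSAN storage policy with
# "No data redundancy" (FTT=0); the partner application replicates its own data.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="partner-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="vsan-ftt-0",
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="partner-app", body=pvc
)
```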

Some readers will be aware that we have provided limited support for Shared Nothing Architectures (SNA) in the past, but this meant we had to take various steps, such as disabling clustering features for the application. In this case, since we are using PodVMs in the Supervisor Cluster, these are already outside the control of vSphere HA and DRS. Thus, it becomes much easier to deploy SNAs on vSAN through PodVM objects. Deploying directly onto the vSAN datastore like this is fully supported with the Data Persistence platform, but there is another option available as well.

To facilitate a high performance data path for these applications, the Data Persistence platform also introduces a new construct for storage called vSAN-Direct. vSAN-Direct allows applications to consume the local storage devices on a vSAN host directly. However, these local storage devices are still under the control of HCI management, so that health, usage and other pertinent information about the device is bubbled up to the vSphere Client. The primary goal here is to allow cloud native applications to be seamlessly deployed onto vSAN, but at the same time have those applications understand infrastructure operations such as maintenance mode, upgrades and indeed host failures. As mentioned, we have partnered with a number of cloud native application vendors who will create bespoke Kubernetes operators that will work with the Data Persistence platform. Partners can then define how their application should behave (e.g. re-shard, evacuate, delete and reschedule Pods, etc.) when a vSphere operation is detected.

I will write more about the Data Persistence platform as our design partners start to come online. To learn more about it, check out VMworld 2020 session #HCI2529.

Summary

As you can see, there is lots of new goodness in the vSAN 7.0U1 release. Many of these features have been requested by customers for some time, but there are also significant improvements in enabling vSAN to become a platform for both container and virtual machine workloads. Note that there is a range of additional features and enhancements in this release which I have not covered here. Please check out the official vSAN 7.0U1 documentation for a complete and comprehensive list of updates.

16 Replies to “vSAN 7.0U1 – What’s new?”

    1. Yes – I should have mentioned it in the post, but the vSAN iSCSI Target (VIT) is now aware of stretched cluster configurations, so the “I/O owner” is now placed so that iSCSI traffic does not cross sites.

      There is still some work to be done for vSAN File Services, so we do not support that on vSAN stretched cluster just yet.

  1. Hello. I want to ask about File Services with the ROBO license.
    You know ROBO has a 25 VM limit, but File Services doesn’t consume any VM licenses. So with a ROBO license, what is the File Services limit?

  2. 2 questions.
    1 - In theory, could I run vSphere Standard with vSAN Enterprise to enable File Services? (I know I lose other features like DRS with Standard.)
    2 - With SMB support, I don’t see any articles or posts about antivirus/malware. What is the plan around that?

    Thank you,
    -GB

    1. No – I believe the VCF ROBO solution requires 3-4 nodes at present. It would be interesting to know if you have a use-case for 2-node though?

  3. Hi Cormac,

    any idea why I don’t see the Active Directory checkbox in the File Services setup wizard on a 7U1 cluster?
    vCenter is joined to AD, do I have to do any additional steps?
    In our VCF 4.1 lab environment, I have the AD checkbox.

    If you don’t have an idea, I will open an SR.

    BR Johannes

    1. Hi Johannes – no, I don’t know why it would be disabled. I have just tried it on my system (7.0U1) and it is available. Are you running 7.0U1 across all hosts and vCenter server, as Directory Service integration is in 7.0U1 only?

  4. Do VMware plan to better support single-disk expansions, more like how Nutanix does it? Disk groups are a bit tricky sometimes.

    1. Hi Ketil,

      We have a number of plans around vSAN going forward, but nothing I can share publicly. Reach out to your local VMware rep if you are interested in hearing about the roadmap, but it would have to be under an NDA (Non Disclosure Agreement).

  5. Hello,

    Is it possible to estimate when vSAN File Services will be usable with stretched clusters?
    Thanks for answer in advance.

    Eduard

    1. Hi Eduard – I’m afraid we can’t talk about futures at this point in time. If you speak to your local VMware rep, they may be able to share a roadmap with you under NDA.

      Cormac

  6. Hello Cormac,

    With vSAN Direct, does the service container size get limited to the capacity available on a single host? And with the SMB file services, I am assuming there’s no way to do agent-based file-level backups like in traditional server OS based systems.

    1. Correct – although you can build a Storage Pool of many devices, and consume from the pool. I’ll do a more thorough write-up on vSAN-Direct once we have some partner announcements for the Data Persistence platform.

      And no – there is no direct agent based method. The recommendation at this time is to back it up from a client which has the share mounted.
