vSAN File Services now supports the SMB protocol
In vSAN 7.0, we announced support for vSAN File Services. In that release, we supported the creation of NFS volumes that could be presented to NFS clients. In vSAN 7.0U1, we have extended the supported protocols to include the SMB (Server Message Block) network protocol alongside NFS. vSAN can now present file shares that support NFS v3 and v4.1, as well as SMB v2.1 and v3.
vSAN File Services now supports Kerberos and Active Directory
One of the obvious use cases for vSAN file shares is providing users with home folders/directories. To ensure that users' files are protected, and that users can only see their own shares, integration with Kerberos authentication for NFS and with Active Directory for SMB is required. This integration is now available in vSAN File Services 7.0U1, so vSphere administrators now have full control over vSAN file share permissions.
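To make this a little more concrete, here is a minimal sketch of what authenticated access to such a share might look like from a client, using the third-party smbprotocol Python library (installed with its Kerberos extra). The server and share names are placeholders I have made up for illustration, not anything vSAN-specific.

```python
# Minimal sketch: accessing a file share over SMB with Kerberos
# authentication, using the third-party "smbprotocol" library
# (pip install "smbprotocol[kerberos]"). The server and share names
# below are hypothetical placeholders.
import smbclient

# Register a session against the file service endpoint. With
# auth_protocol="kerberos" the library uses the client's existing
# Kerberos ticket (e.g. from an Active Directory login) rather than
# a username/password pair.
smbclient.register_session("fileservice.example.com", auth_protocol="kerberos")

# List the contents of a user's home share; whether this call succeeds
# is governed by the permissions applied through Active Directory.
for entry in smbclient.listdir(r"\\fileservice.example.com\home-jdoe"):
    print(entry)
```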
vSAN File Services Scale Increase
The number of ESX Agent Manager VMs that support the vSAN File Services protocol stack has been increased from a maximum of 8 per vSAN cluster to a maximum of 32 per vSAN cluster.
Introducing Disaggregated HCI / HCI Mesh
One of the most common pieces of feedback we have heard over the years with vSAN is that when there is available capacity on one vSAN cluster, it cannot be easily used by another vSAN cluster. Sure, there are ways around that with the iSCSI Target service and vSAN File Services, but customers wanted a simpler approach. In vSAN 7.0U1, we are introducing the first version of a disaggregated vSAN, also known as HCI Mesh. This allows a local vSAN cluster to mount the vSAN datastore from another (remote) vSAN cluster, and vice versa. This means that if there is “stranded space” on some vSAN datastores, it can now be consumed by a remote vSAN cluster and used for provisioning virtual machine objects. There are some requirements around networking which will be made clear in the official docs. There are also some scaling limits in this first version. For example, you can only federate a total of 16 vSAN clusters in an HCI Mesh. Within that mesh, a single cluster can mount up to 5 remote vSAN datastores, and similarly, a vSAN datastore can only be mounted by 5 remote vSAN clusters. I’m sure this feature will be a big hit with customers who already have many vSAN clusters.
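To make those limits concrete, here is a small illustrative Python model of the mesh constraints. The numbers come straight from the limits above; the data structures and function are entirely hypothetical and not part of any vSAN API.

```python
# Illustrative model of the HCI Mesh scaling limits described above:
# at most 16 vSAN clusters federated in a mesh, each cluster mounting
# at most 5 remote vSAN datastores, and each vSAN datastore mounted by
# at most 5 remote clusters. Hypothetical code, not a vSAN API.
MAX_CLUSTERS_PER_MESH = 16
MAX_REMOTE_DATASTORES_PER_CLUSTER = 5
MAX_CLIENT_CLUSTERS_PER_DATASTORE = 5

def can_mount(mesh: dict[str, set[str]], client: str, owner: str) -> bool:
    """Return True if `client` may mount the datastore owned by `owner`.

    `mesh` maps each cluster name to the set of remote clusters whose
    datastores it currently mounts.
    """
    # Every cluster appearing anywhere counts towards the federation size.
    clusters = set(mesh) | {c for m in mesh.values() for c in m} | {client, owner}
    if len(clusters) > MAX_CLUSTERS_PER_MESH:
        return False  # would exceed the 16-cluster federation limit
    if len(mesh.get(client, set())) >= MAX_REMOTE_DATASTORES_PER_CLUSTER:
        return False  # client already mounts 5 remote datastores
    servers = sum(1 for mounts in mesh.values() if owner in mounts)
    if servers >= MAX_CLIENT_CLUSTERS_PER_DATASTORE:
        return False  # datastore already serves 5 remote clusters
    return True

mesh = {"cluster-a": {"cluster-b"}}
print(can_mount(mesh, "cluster-a", "cluster-c"))  # True
```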
A new Compression Only Data Service
This is another feature that our customers have been requesting for some time. Prior to this release, the deduplication and compression space efficiency features were combined; you could not enable one without the other. So even workloads that did not benefit from deduplication needed to have it enabled on the vSAN datastore if they wanted the compression feature. This coupling also had availability implications, since the deduplication hash table was striped across all of the disks in the disk group: should a disk fail while deduplication was enabled, the failure impacted the whole of the disk group. Having an option to enable compression only in vSAN 7.0U1 is definitely a nice update.
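For those who automate cluster configuration, here is a rough sketch of what enabling the new combination might look like with the vSAN Management SDK for Python (pyVmomi plus the vSAN API bindings). This is a sketch under those assumptions, not a definitive recipe: it presumes `si` is an already-authenticated ServiceInstance and `cluster` is the target ClusterComputeResource, and it omits SSL setup, task handling and error handling.

```python
# Sketch: enabling the compression-only data service on a cluster,
# assuming the vSAN Management SDK for Python (pyVmomi plus the vSAN
# API bindings). `si` is an authenticated ServiceInstance and `cluster`
# is the target vim.ClusterComputeResource; SSL context, task tracking
# and error handling are omitted for brevity.
import vsanapiutils
from pyVmomi import vim

# Fetch the vSAN managed objects exposed by vCenter.
vc_mos = vsanapiutils.GetVsanVcMos(si._stub)
cluster_config = vc_mos["vsan-cluster-config-system"]

# dedupEnabled=False with compressionEnabled=True is the new
# compression-only combination; prior to 7.0U1 the two settings could
# effectively only be enabled together.
spec = vim.vsan.ReconfigSpec(
    dataEfficiencyConfig=vim.vsan.DataEfficiencyConfig(
        dedupEnabled=False,
        compressionEnabled=True,
    )
)
task = cluster_config.VsanClusterReconfig(cluster, spec)
```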
vSAN “Shared” Witness Appliance
Customers who have deployed 2-node vSAN clusters will be very much aware of the requirement to use a witness appliance. For each 2-node vSAN cluster deployed, an additional witness appliance also needed to be deployed. In vSAN 7.0U1, these 2-node vSAN clusters now have the ability to share the same witness appliance. A single vSAN 7.0U1 witness appliance can now support up to 64 2-node vSAN clusters. Note that the shared witness appliance is only available for 2-node vSAN clusters at this time. It cannot be used for vSAN Stretched Clusters.
vSAN Data Persistence platform
I believe most readers at this point will be well aware of the shift in application development towards a more cloud native approach, typically involving containers and most likely orchestrated by Kubernetes. With this in mind, VMware are continuously enhancing vSAN to be a platform for both container workloads and virtual machine workloads. This latest development is a step towards enabling “cloud native” applications, which are normally “shared-nothing” with their own replication and built-in availability features, to be deployed successfully on vSAN. At the same time, we want to ensure that these applications can run as optimally as possible from a storage perspective. Lastly, these applications will have the built-in smarts to understand what action to take when an event occurs on the underlying vSphere infrastructure, e.g. maintenance mode, upgrade, patching, etc.
The vSAN Data Persistence platform (DPp) will deploy partner applications in the Supervisor Cluster of vSphere with Kubernetes. We are currently working with a handful of design partners in the initial release. These partner applications are “shared-nothing” and already have built-in replication/protection features, which means that vSAN does not need to provide any protection at the underlying layer – the storage objects for the application are provisioned with no protection (FTT=0 to use vSAN terminology).
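As an illustration of what FTT=0 provisioning could look like from the Kubernetes side, here is a hedged sketch using the official kubernetes Python client to create a StorageClass that references a no-protection vSAN storage policy through the vSphere CSI driver. The policy name "FTT-0" and StorageClass name are placeholders I have invented; the example assumes such a policy already exists in vCenter and that the CSI driver is installed.

```python
# Sketch: a StorageClass that provisions volumes with a no-protection
# (FTT=0) vSAN policy, leaving replication to the application itself.
# Uses the official `kubernetes` Python client. Assumes the vSphere CSI
# driver is installed and a storage policy named "FTT-0" already exists
# in vCenter; both names are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()

storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="vsan-ftt-0"),
    provisioner="csi.vsphere.vmware.com",
    # The CSI driver resolves this parameter to the SPBM policy in
    # vCenter; with FTT=0, the application, not vSAN, provides the
    # data protection.
    parameters={"storagepolicyname": "FTT-0"},
)
client.StorageV1Api().create_storage_class(storage_class)
```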
Some readers will be aware that we have provided limited support for Shared Nothing Architectures (SNA) in the past, but this meant we had to take various steps, such as disabling clustering features for the application. In this case, since we are using PodVMs in the Supervisor Cluster, these are already outside the control of vSphere HA and DRS. Thus, it becomes much easier to deploy SNAs on vSAN through PodVM objects. Deploying directly onto the vSAN datastore like this is fully supported with the Data Persistence platform, but there is another option available as well.
To facilitate a high performance data path for these applications, the Data Persistence platform also introduces a new construct for storage called vSAN-Direct. vSAN-Direct allows applications to consume the local storage devices on a vSAN host directly. However, these local storage devices are still under the control of HCI management, so that health, usage and other pertinent information about the device is bubbled up to the vSphere client. The primary goal here is to allow cloud native applications to be seamlessly deployed onto vSAN, but at the same time have those applications understand infrastructure operations such as maintenance mode, upgrades and indeed host failures. As mentioned, we have partnered with a number of cloud native application vendors who will create bespoke Kubernetes operators that work with the Data Persistence platform. Partners can then define how their application should behave (e.g. re-shard, evacuate, delete and reschedule Pods, etc.) when a vSphere operation is detected.
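The operators themselves are partner-specific, but the underlying pattern is a controller watching for infrastructure events and reacting. Here is a purely conceptual Python sketch of that pattern using the official kubernetes client: it watches for nodes becoming unschedulable (as happens when a host is cordoned for maintenance) and hands off to a hypothetical application hook. None of this is DPp code; it just illustrates the idea.

```python
# Conceptual sketch of the operator pattern described above, using the
# official `kubernetes` Python client: watch for nodes becoming
# unschedulable (as when a host enters maintenance mode) and hand off
# to an application-specific reaction. The real DPp operators are
# written by partners; `reshard_away_from` is a hypothetical hook.
from kubernetes import client, config, watch

def reshard_away_from(node_name: str) -> None:
    # Placeholder for partner logic: re-shard, evacuate, or reschedule
    # the application's pods away from the affected node.
    print(f"rebalancing data away from {node_name}")

config.load_kube_config()
v1 = client.CoreV1Api()
w = watch.Watch()
for event in w.stream(v1.list_node):
    node = event["object"]
    if node.spec.unschedulable:
        reshard_away_from(node.metadata.name)
```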
I will write more about the Data Persistence platform as our design partners start to come online. To learn more about it, check out VMworld 2020 session #HCI2529.
Summary
As you can see, there is lots of new goodness in the vSAN 7.0U1 release. There are lots of features here that customers have been requesting for some time, but also significant improvements in enabling vSAN to become that platform for both container and virtual machine workloads. Note that there is a range of additional features and enhancements in this release which I have not spoken about. Please check out the official vSAN 7.0U1 documentation for a complete and comprehensive list of updates.