What’s new in Virtual SAN 6.5

I know that there will be a lot of information coming your way from various sources on this exact topic. Obviously, I would urge you to check out the latest and greatest documentation from our technical marketing guys for deeper detail and “how-to” guides. However, I did want to provide a brief overview of the new VSAN features available in vSphere 6.5. Note that we also refer to this version of VSAN as VSAN 6.5, aligning it with the vSphere release.

1. Licensing Changes

The first thing that I wanted to highlight is a set of significant changes to the way VSAN is licensed. Licensing has been relaxed so that you can now deploy an All-Flash VSAN cluster with the VSAN Standard license. Note, however, that data services such as deduplication and compression are not available with this Standard license; you will still need the higher license edition to use those data services.

2. iSCSI Support

VSAN 6.5 introduces new functionality to create iSCSI targets and LUNs on a VSAN cluster, and present these LUNs outside of VSAN for other use cases. This could be a useful feature if you have excess capacity on your VSAN cluster and wish to have it consumed outside of the cluster. Be aware that there are a number of limits and supportability considerations with regard to how you can use this new feature. For example, you will not be able to present these iSCSI LUNs to ESXi hosts. I strongly urge you to refer to the official documentation, and check out what you can, and more importantly, what you cannot do with these iSCSI LUNs before putting them into production.
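
To give a feel for how this might be scripted, here is a minimal PowerCLI sketch. The cmdlet names (New-VsanIscsiTarget, New-VsanIscsiLun) and their parameters are my assumptions based on the vSAN cmdlets announced alongside this release, and the server, cluster and LUN names are placeholders, so treat this as a sketch rather than a definitive recipe.

```powershell
# Connect to vCenter (placeholder server name and credentials)
Connect-VIServer -Server 'vcsa.lab.local' -User 'administrator@vsphere.local' -Password 'VMware1!'

$cluster = Get-Cluster -Name 'VSAN-Cluster'

# Assumed cmdlet: create an iSCSI target on the vSAN cluster
$target = New-VsanIscsiTarget -Cluster $cluster -Name 'vsan-target01'

# Assumed cmdlet: carve out a 100 GB LUN behind that target
New-VsanIscsiLun -Target $target -Name 'lun0' -CapacityGB 100
```

Remember that, as noted above, these LUNs are intended for consumption outside the cluster; presenting them back to ESXi hosts is not supported.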

3. Direct Connect and separate witness traffic on 2-node deployments

This is a very interesting improvement for those of you interested in deploying 2-node VSAN in either a remote office/branch office (ROBO) scenario, or indeed for a small-to-midsize business (SMB) use case. VMware now supports having the two data nodes in this deployment connected via direct connect network cables, removing the need for a physical switch between the data nodes. Included with this enhancement is a mechanism for decoupling the VSAN witness traffic from the data traffic. This means that the VSAN data traffic can be left on the direct connect network, while witness traffic can be sent via another VMkernel interface to the witness node/appliance. Again, there is a lot of how-to documentation and deeper detail on how to deploy this new configuration coming from our technical marketing team, but this should make it a lot less expensive to deploy a 2-node VSAN configuration.
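
For reference, the witness traffic tagging is done per host with esxcli (esxcli vsan network ip add -i <vmknic> -T=witness). Below is a small PowerCLI sketch of the same thing; the host names and the vmk1 interface are placeholders for your own environment.

```powershell
# Tag vmk1 for vSAN witness traffic on each data node
foreach ($hostName in 'esxi-node1.lab.local', 'esxi-node2.lab.local') {
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name $hostName) -V2

    # Equivalent of: esxcli vsan network ip add -i vmk1 -T=witness
    $esxcli.vsan.network.ip.add.Invoke(@{
        interfacename = 'vmk1'
        traffictype   = 'witness'
    })
}
```

The vSAN data traffic stays on the VMkernel port that rides the direct connect link, while vmk1 carries only the comparatively light witness traffic out to the witness appliance.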

4. 512e Device Support

This is something that a number of customers have been asking for. While there is still no support for 4K native devices, support for 512e (512-byte emulation) devices will allow VSAN to use much larger capacity devices going forward.
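
If you want to check whether a given device reports itself as 512n or 512e, ESXi can show the logical and physical block sizes via esxcli storage core device capacity list. Here is the same call made through PowerCLI's Get-EsxCli; the host name is a placeholder.

```powershell
# List per-device capacity details, including the format type (512n vs 512e)
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'esxi-node1.lab.local') -V2

# Equivalent of: esxcli storage core device capacity list
$esxcli.storage.core.device.capacity.list.Invoke() | Format-Table -AutoSize
```

A 512e device shows a logical block size of 512 bytes over a physical block size of 4096 bytes, whereas a 512n device reports 512 bytes for both.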

5. PowerCLI cmdlets for VSAN

Something else that a lot of customers have been asking for is a set of PowerCLI cmdlets to allow scripting/automation of various VSAN tasks. Well, with the new version of PowerCLI that is coming soon, you will notice a bunch of new cmdlets available for VSAN which will allow you to do just that. My understanding is that these cmdlets will also be backward compatible with previous versions of VSAN.
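
As a taste of what that scripting might look like, here is a short sketch. Since the PowerCLI release is still to come, cmdlet names such as Get-VsanClusterConfiguration, Get-VsanSpaceUsage and Test-VsanClusterHealth are assumptions on my part; check the PowerCLI release notes for the final names and parameters.

```powershell
$cluster = Get-Cluster -Name 'VSAN-Cluster'   # placeholder cluster name

# Assumed vSAN cmdlets from the upcoming PowerCLI release:
Get-VsanClusterConfiguration -Cluster $cluster   # is vSAN enabled? dedupe/compression? etc.
Get-VsanSpaceUsage -Cluster $cluster             # capacity overview of the vSAN datastore
Test-VsanClusterHealth -Cluster $cluster         # run the vSAN health checks
```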

Some nice new features I’m sure you will agree.

24 Replies to “What’s new in Virtual SAN 6.5”

    1. This was mentioned in the keynote at VMworld EMEA day #2. Sign up for the upcoming beta of the next release of VSAN to learn more.

  1. Great article. VSAN 6.5 is the 5th generation of VSAN, and it is great that VMware VSAN now supports 2-node direct connect. This will be very helpful for smaller customers or for customers with ROBO environments; these deployments will become a little bit cheaper. The good news is that all-flash hardware support is now available with VSAN Standard. The new iSCSI support and 512e device support are also new features of VSAN.
    Thanks for sharing. The way you explained everything is really great. Thanks once again.

  2. With the iSCSI support in vSAN, can we use the iSCSI target at the OS level in a VM sitting on the vSAN? Can we use an iSCSI target as an RDM for a VM on the vSAN for MSCS?

    1. The use cases are still under consideration. We will have to wait until GA (or at least closer to GA) to know all of the supported use cases. Support for MSCS is certainly part of the discussions.

  3. Any chance of seeing multiple VSAN datastores in one cluster? I think that many organizations now need two tiers of storage (SSD for high-I/O VMs, hybrid for high capacity). It could even help with migrations from hybrid to all-flash in large setups.

  4. Hi Cormac,

    According to the VMware documentation, the default datastore is VMFS 5 if the device is in 512n format. When I installed vSphere 6.5 ESXi on a SATADOM device, I found that the default datastore was VMFS 5. I am confused about this scenario; it seems to suggest that the SATADOM is a 512n device, but how do I identify the device format? Is there any method in ESXi that can show the device format?

    Thanks in advance.

      1. According to the VMware KB at https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2091600: “Prior to version 6.5, direct attached drives which expose 512e to vSphere and vSAN are not supported due to potential performance issues when using these drives.”

        I have a test environment configured as:
        1. Three ESXi hosts running VSAN 6.2
        2. Each host has a SSD as cache tier and HDDs as capacity tier.
        3. The SSD is 512e format and HDD is 512n format.

        Here are my questions:
        1. My environment runs VSAN 6.2 fine with a 512e format SSD configured. This seems to suggest that the 512e SSD is being treated as a 512n device as a fallback, because a 512e device shouldn’t be supported on VSAN 6.2. Right?
        2. So the 512e format doesn’t matter for SSDs in VSAN 6.2, right?

  5. Great post! A question about the witness node: does it have to be in the same vCenter SSO domain as the VSAN cluster? What I mean is, do we have to point to it as a vCenter object, or can we point to its IP address?

    Cheers!

      1. That’s interesting. I was under the impression one had to point to the witness node object when configuring the fault domains? To clarify, I’m talking about a 2-node vSAN ROBO cluster.

        1. Ah – now I see what you mean.

          The witness must be added to the VC inventory for sure. However, it is not part of the vSAN cluster; rather, it is added as a separate host. It can be added to VC by FQDN or IP address.
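
          For reference, adding the witness to the VC inventory as a standalone host can be done with stock PowerCLI; the host name, datacenter and credentials below are placeholders:

```powershell
# Add the witness appliance to vCenter as a standalone host,
# outside the 2-node vSAN cluster (placeholder names and credentials)
Add-VMHost -Name 'witness01.lab.local' `
    -Location (Get-Datacenter -Name 'ROBO-DC') `
    -User 'root' -Password 'VMware1!' -Force
```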

          1. We would like to install a vSAN 2-node cluster as an HCI package for a small customer who does not have a central vCenter. We are struggling a bit with the witness and its requirement of being placed outside the vSAN cluster but within vCenter. As it looks now, we should probably go for a 3-node vSAN cluster, which of course will be more expensive.

  6. Hi Cormac, I am a VCI and I always recommend your excellent blog to my students.
    Today they asked me a very interesting question regarding the 2-node cluster with direct connection. Since you redirect the witness traffic to another VMkernel interface, what happens if you lose the direct link between the two nodes?

    In that case you have a split-brain situation where both hosts may contact the witness, and you don’t have a preferred host, do you?

    I could not find anything in the admin guide for 6.5 (or maybe I didn’t look well enough).

    Thanks for this great blog!!

    1. You get the same behavior as in a vSAN stretched cluster – the cluster will form between the “preferred site” and the witness.

      1. So I understand that you have to configure it like a stretched cluster, with one node at each site?

        That makes sense.

        Thanks
