VSAN Part 2 – What do you need to get started?

This post contains the list of items you will need to get started with VSAN. I’ll also try to highlight some best practices when it comes to configuring VSAN. First off, let’s start with the software requirements – those are the easy bits. You will need ESXi version 5.5 and vCenter Server version 5.5. The vCenter Server can be either the Windows version or the appliance version; both support VSAN. Finally, you will need to familiarize yourself with the vSphere web client if you haven’t already done so. VSAN can only be managed from the vSphere web client; it is not supported in the older C# client.
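If you want to double-check what a host is running before you begin, the version string is available straight from the ESXi shell. A quick sanity check (the build number shown is just an example and will differ on your system):

    # Confirm the host is running ESXi 5.5
    ~ # vmware -vl
    VMware ESXi 5.5.0 build-1331820
    VMware ESXi 5.5.0 GA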

From a hardware perspective, you will need at least 3 ESXi hosts. You will also need at least one HDD (magnetic disk) per host and at least one SSD (solid-state disk) per host. At least 3 ESXi hosts in the VSAN cluster must contribute local storage to the vsanDatastore, and this local storage must be a combination of SSD and HDD. (A quick way to verify cluster membership from the command line is sketched after the list below.) There are a couple of best practices to call out here:

  1. VMware recommends a ratio of at least 1:10 SSD to HDD capacity; in other words, SSD capacity should be roughly 10% of HDD capacity. Depending on the working sets of your virtual machines, you may need to increase this ratio.
  2. While VSAN can scale out on compute only (i.e. a subset of ESXi hosts providing both compute and storage, with additional ESXi hosts providing compute only), VMware recommends as a best practice that all hosts in the VSAN cluster be configured similarly, if not identically, from a storage and compute perspective.
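As promised above, here is that quick membership check. Once the cluster is formed, each host reports its VSAN membership and role (master, backup or agent) from the ESXi shell. A hedged sketch, with placeholder UUIDs and the output trimmed:

    # Check this host's VSAN cluster membership and role
    ~ # esxcli vsan cluster get
    Cluster Information
       Enabled: true
       Local Node UUID: <node-uuid>
       Local Node State: MASTER
       Sub-Cluster Master UUID: <node-uuid>
       Sub-Cluster Member UUIDs: <uuid-1>, <uuid-2>, <uuid-3>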

With that in mind, the choice of SSD is essential to VSAN performance. VMware is providing an HCL which will grade SSDs on performance. Obviously, the higher the performance (the number of sustained write operations per second), the better the performance of your virtual machines and the higher the density of virtual machines per host. However, lower-grade SSDs will work just fine, on the understanding that your VM density will be lower. It should also be noted that a number of PCIe-based SSDs are supported. Another consideration when choosing SSDs should be the longevity or lifetime guarantee that comes with the SSD. Choose your SSD carefully.
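One practical check before you build: ESXi must detect the device as an SSD for VSAN to use it as the flash tier. A minimal sketch from the ESXi shell, with a placeholder device identifier:

    # Verify that ESXi detects the device as an SSD
    ~ # esxcli storage core device list -d naa.<device-id> | grep "Is SSD"
       Is SSD: true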

The next item to highlight is the disk controller. VSAN implements a distributed RAID configuration across all hosts in the cluster. This means that if a node fails (taking its local storage with it, of course), virtual machines still have a full complement of data objects available and can continue to run (very much like RAIN – Redundant Array of Independent Nodes). How is this related to the disk controller, you might ask? Well, the disk controller needs to be able to run in pass-thru mode or HBA mode; this allows the physical disks to be passed directly through to the ESXi host without a layer of RAID on top. VSAN will take care of the RAID requirements. Again, the HCL should be referenced for further details on supported controllers that work in pass-thru/HBA mode.
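ESXi 5.5 also ships with a small utility, vdq, which reports whether VSAN considers each local disk eligible; a disk hidden behind a RAID volume, or one that already carries partitions, shows up as ineligible with a reason. A rough sketch of its use, with a placeholder device name and the output trimmed:

    # Query VSAN disk eligibility for all local disks
    ~ # vdq -q
    [
       {
          "Name"     : "naa.<device-id>",
          "VSANUUID" : "",
          "State"    : "Eligible for use by VSAN",
          "Reason"   : "None",
          "IsSSD"    : "1",
       },
    ]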

Finally, your network needs to be considered. VMware is supporting both 1Gb and 10Gb network interconnects between ESXi hosts. VSAN requires that a VMkernel port be configured to carry VSAN traffic between hosts. Again, although we will support both, 10Gb would be the recommended best practice for production environments. But, for the initial beta (see http://vsanbeta.com), either will suffice. If you do go with 1Gb, keep this in mind when measuring VSAN performance, as the network may become the bottleneck if you deploy a large VSAN cluster and lots of virtual machines with policy settings that generate a lot of VSAN network traffic.
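Tagging the VMkernel port for VSAN traffic is normally done in the web client, but the equivalent is available from the ESXi shell. A minimal sketch, assuming a VMkernel interface vmk1 already exists on the host:

    # Tag an existing VMkernel interface for VSAN traffic
    ~ # esxcli vsan network ipv4 add -i vmk1

    # Confirm which interfaces now carry VSAN traffic
    ~ # esxcli vsan network list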

That’s it – with these requirements met, you are good to go with VSAN. To see some demos of VSAN, and steps on how to configure it, check out the online product walk-throughs here.

Replies

  1. Cormac, apologies if this is answered elsewhere but I’ve heard conflicting reports regarding this. Can you confirm if TRIM is supported in vSphere 5.5 – specifically for both the VSAN and the vFRC features?

    1. Is this in the context of reclaiming dead space Sam? If so, then I’m pretty sure the SCSI UNMAP command gets converted to TRIM when used on SSD devices. At least, that is what I was led to believe when I asked this question some time back. Let me know if your query is different to this.

      1. Hi Cormac – more specifically, regarding performance. My concern is that, especially on the consumer-level drives we’re looking at for test environments, which rely on TRIM because of their lackluster garbage collection, write performance is going to fall through the floor if vSphere isn’t proactively issuing TRIM commands.

        I suppose the answer in this situation would be to ensure that “esxcli storage vmfs unmap” is run regularly?
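        For reference, a hedged sketch of how that command is invoked against a VMFS volume (the datastore name is a placeholder):

          # Manually reclaim dead space on a VMFS datastore
          ~ # esxcli storage vmfs unmap -l <datastore-name>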

  2. Oh, I also forgot to ask: any chance of supporting hardware RAID cards in the future? Perhaps some kind of VASA-like interface would allow vSphere to determine the underlying disk topology of the RAID set?

    I ask because in the case of HP, there is no HBA mode I am aware of today with their Smart Array controllers. You effectively have to configure a Single Disk RAID 0 for each drive to meet these requirements.
    Would be much easier if we could just present an SSD Logical Drive and an HDD Logical Drive and let the HW-based RAID card still work its magic.

    With the SSD Write Caching being done in vSAN, underlying HDD arrays using RAID5 are no longer as big a deal for performance as they would be otherwise.

    1. The problem with using RAID on top of VSAN is the additional cost/overhead from a capacity perspective.

      Some HP controllers will be supported in what is called “pseudo pass-through”. That will probably mean defining a RAID-0 volume for each individual disk, though.

  3. Hi Cormac,

    Would VSAN ever look at having a dedicated write SSD and a dedicated read SSD in the future?

    I’m thinking that because the newer but slower NAND technologies tend to have fewer Program/Erase (P/E) cycles and take longer to complete a P/E, in a write-intensive environment would it make sense to use an SLC-based device for writes and an MLC/TLC-based device for reads?

  4. Cormac,

    Question about how VSAN will be marketed. Will it be a future feature of ESXi 5.5 or an add-on? If an add-on, which edition of 5.5 will support it? Will Enterprise be a requirement?

  5. Hi Cormac

    “You will also need at least one HDD (magnetic disk) per host and at least one SSD (solid-state disk) per host.”

    At least one SSD per host or per disk group?

    Axl

    1. Hi Axl,
      You will need a minimum of one SSD per host. However, there can only ever be one SSD in a disk group. Therefore, with a maximum of 5 disk groups per host, you can have a maximum of 5 SSDs per host. Hope that makes sense.
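      For what it’s worth, a disk group can also be created from the ESXi shell by pairing the SSD with one or more HDDs; a minimal sketch with placeholder device names:

        # Create a disk group: one SSD (cache) plus one HDD (capacity)
        ~ # esxcli vsan storage add -s naa.<ssd-id> -d naa.<hdd-id>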

  6. I was looking at all the capabilities of VSAN and I was wondering how much CPU and RAM a server consumes when you use VSAN. For example, the 100K+ IOPS figure must consume some CPU resources. I looked in the blogs and the online documentation and did not find an answer.

    1. Hi Gary,
      The design goal was that VSAN overhead should never consume more than 10% of resources on a given host. Testing so far has revealed that the overhead is much lower than that.

  7. Cormac, great webinar this morning. I ran out of time to ask a question during the webinar, so I hope this is ok.
    Can there be more than one VSAN datastore? If so, can one server be part of more than one VSAN datastore? Can a VM be automatically moved between datastores? Can a VM get storage from more than one datastore?

    1. Glad you enjoyed it, Gary. No, with the 1.0 release there will only ever be a single VSAN datastore. To have multiple datastores, you would need to create multiple VSAN clusters, each with its own datastore.

      However, SAN/NAS datastores may be presented to the same hosts that are in the VSAN cluster. In that case, you can migrate VMs between the VSAN datastore and the SAN/NAS datastores using Storage vMotion.

      And while yes, a VM may have storage on both VSAN and SAN/NAS, it’s probably not something we would recommend, as the storage on SAN/NAS could not be protected by policies in the same way as the storage objects on VSAN. I suspect we’d want all of the VM to be on either one or the other.
