Virtual SAN (VSAN) Announcement Review

Very exciting day today at VMware – Virtual SAN (VSAN) has been officially announced.

If you weren’t able to attend, here are some of the interesting parts of the announcement:

  • 32-node support (up from the 16-node support announced at Partner Exchange last month, and up from the 8 nodes we supported during the beta)
  • 2 million IOPS (using Iometer, 100% read, 4KB block size).
    • Also, 640K IOPS was achieved with a 70/30 read/write ratio, 4KB block size, and 80% random.
  • 3200 virtual machines (100 per node)
  • 4.4 PB of storage (35 disks per host x 32 hosts per cluster; a quick sanity check of these figures follows this list)
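
For anyone who wants to sanity-check those maximums, here is a quick back-of-the-envelope sketch. The node, VM and disk counts are straight from the announcement; the implied per-disk size is my own arithmetic and assumes decimal units (1 PB = 1000 TB):

```python
# Back-of-the-envelope check of the announced VSAN configuration maximums.
# Inputs are the figures quoted above; only the per-disk size is derived.

nodes_per_cluster = 32
vms_per_node = 100
disks_per_host = 35
total_capacity_pb = 4.4

total_vms = nodes_per_cluster * vms_per_node      # 32 x 100 = 3200 VMs
total_disks = nodes_per_cluster * disks_per_host  # 32 x 35 = 1120 disks

# Working backwards from the 4.4 PB figure (assumption: 1 PB = 1000 TB)
# gives the disk size that figure implies.
implied_tb_per_disk = total_capacity_pb * 1000 / total_disks  # ~3.9 TB

print(f"{total_vms} VMs, {total_disks} disks, ~{implied_tb_per_disk:.1f} TB per disk")
```

Working backwards like this suggests the 4.4 PB figure assumes roughly 4 TB drives in every disk slot.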

We were also told that performance scales linearly from a 4-node cluster up to a 32-node cluster.

We also saw interoperability with vSphere Replication for DR, as well as with vSphere Data Protection for backups, vMotion, DRS, HA, VMware View, etc.

It seems that we’ll GA VSAN during the week of March 10th (next week).

The one thing which wasn’t shared yet was pricing/licensing details. Guess we’ll have to wait for GA to get that info. All very exciting.

29 Replies to “Virtual SAN (VSAN) Announcement Review”

    1. So linked clones are not an issue as we support View. Just not sure about SE Sparse Disks. I’ll dig further and see what I can find.

      1. Just had it confirmed by the product team. Although we will support VMware View at GA, there will be no support for SE Sparse Disks in the initial release.

  1. Max 100 VMs on a node? That doesn’t sound good for a VDI environment, where you would like around 150 or maybe even more per node…

  2. Which SSDs were used to get 2 million IOPS with 32 nodes? Was it one SSD per host? 2,000,000 IOPS / 32 SSDs = 62,500 IOPS. Are these SATA SSDs or PCIe SSDs?
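
For reference, the division in that question works out as below for both quoted figures. The one-SSD-per-host split is the commenter’s assumption, not something confirmed in the announcement:

```python
# Spread each announced IOPS figure evenly across the 32 hosts.
# Assumes (as the question does) one caching SSD per host.
hosts = 32
for label, total_iops in [("100% read", 2_000_000), ("70/30 mixed", 640_000)]:
    print(f"{label}: {total_iops / hosts:,.0f} IOPS per host")
# 100% read: 62,500 IOPS per host
# 70/30 mixed: 20,000 IOPS per host
```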

  3. Cormac,

    Congratulations on a great launch!

    Few questions:

    (1) Were the IOs random or sequential?
    (2) What is the working set size? Breaking down the question further:
    2.A. What was the size of the VMDK used for testing?
    2.B. Did Iometer access the entire VMDK, or only a part of it? If the latter, what % of the VMDK was accessed?
    (3) In the case of 100% random reads, was the VMDK pre-populated before the reads were issued? Or were the reads coming from a sparse VMDK?
    (4) How long did the tests run?
    (5) How many physical cores did each host have? What % of the host CPU was consumed by vSAN?

    Thanks.

    Raghu Shastry

    1. Correct – it is not site-aware in the initial release. Both failure domains and stretched/metro solutions are being looked at for future releases.

  4. Did they also do a relevant VDI-style test? 100% reads is good for stats but not very representative of VDI. If performance drops from 2 million to 640K when 100% reads becomes 70% reads, should I be worried when I put a VDI workload on there?
    What would the IOPS be when real-world VDI averages are used, like a 15/85 R/W ratio with 4KB and 64KB blocks that average out at 12KB and are 100% random?
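
The block-size mix in that last question can be made concrete. Assuming the 12KB average is a simple weighted mean of 4KB and 64KB IOs (my reading, not stated in the comment), the implied split is:

```python
# If a fraction f of IOs are 4KB and the rest 64KB, a 12KB mean implies:
#   4*f + 64*(1 - f) = 12  ->  f = (64 - 12) / (64 - 4)
f = (64 - 12) / (64 - 4)
print(f"{f:.1%} of IOs at 4KB, {1 - f:.1%} at 64KB -> 12KB average")
# 86.7% of IOs at 4KB, 13.3% at 64KB -> 12KB average
```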
