Virtual SAN (VSAN)

A quick reference to VSAN (Virtual SAN) posts


  • A link to the Essential VSAN book (second edition) that I co-authored with Duncan Epping. This should be available in Q2, 2016
  • A link to the Essential VSAN book (first edition)

81 thoughts on “Virtual SAN (VSAN)”

  1. Pingback: VSAN Part 3 – It is not a Virtual Storage Appliance |

  2. Pingback: Storage is Sexy (Again): Thank you VSAN | I Tech, Therefore I Am

  3. Pingback: VSAN Part 7 – Capabilities and VM Storage Policies |

  4. Pingback: VSAN Part 8 – The role of the SSD |

  5. Pingback: VMware vSphere Virtual SAN VSAN | chassdesk

  6. Pingback: VSAN Part 9 – Host Failure Scenarios & vSphere HA Interop |

  7. Pingback: VSAN Part 10 – Changing VM Storage Policy on-the-fly |

  8. Pingback: VSAN Part 10 – Changing VM Storage Policy on-the-fly | Storage CH Blog

  9. Pingback: A list of VSAN references – from VMware bloggers | VMware vSphere Blog - VMware Blogs

  10. Pingback: VMware vSphere Blog: A list of VSAN references – from VMware bloggers | System Knowledge Base

  11. Pingback: Virtual SAN webinars, make sure to attend!

  12. Pingback: VMware vSphere Virtual SAN VSAN | Bring Your Own Brain

  13. Pingback: VMware’s Storage Portfolio |

  14. Pingback: A closer look at EMC ViPR |

  15. Pingback: Vmware VSAN |

  16. Pingback: VSAN Part 12 – SPBM extensions in RVC |

  17. Pingback: VSAN Part 13 – Examining the .vswp object |

  18. Pingback: An Introduction to Flash Technology |

  19. Pingback: VSAN Part 16 – Reclaiming disks for other uses |

  20. Pingback: Looking for Radically Simple Storage? | Virtual Insanity

  21. Pingback: Tech Blast #06 - Dave Does What He Wants | Wahl Network

  22. Can you give us some detail on calculating disk yield? If I have 3 nodes with 1TB each, will I see 3TB of storage? Does a VM that uses 50GB of storage take up 50GB, 100GB, or 150GB?

    • There should be a sizing guide going live shortly, but all magnetic disks across all the hosts contribute to the size of the VSAN datastore. The SSDs (or flash devices) do not contribute to capacity. So if you had 1TB of magnetic disk in each of 3 nodes, your VSAN datastore will be 3TB.

      The amount of disk consumed by your VM is based primarily on the failures to tolerate (FTT) setting in the VM Storage Policy. An FTT of 1 implies 2 replicas/mirrors of the VMDK. Therefore a 50GB VMDK created on a VM with an FTT=1 will consume 100GB. A 50GB VMDK created on a VM with an FTT=2 will make 3 replicas/mirrors and therefore consumes 150GB. Hope that makes sense. Lots of documentation coming around this.
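The capacity math in the reply above can be sketched as follows. This is purely illustrative (the function names are not part of any VSAN tooling), assuming the original VSAN RAID-1 mirroring model where FTT=n means n+1 replicas:

```python
# Illustrative sketch of the VSAN capacity math described above.

def vsan_datastore_capacity_gb(magnetic_disks_gb):
    """Only magnetic (capacity tier) disks contribute to the datastore
    size; SSD/flash cache devices do not."""
    return sum(magnetic_disks_gb)

def vmdk_consumption_gb(vmdk_size_gb, ftt):
    """With RAID-1 mirroring, FTT=n implies n+1 replicas of the VMDK."""
    replicas = ftt + 1
    return vmdk_size_gb * replicas

# Three nodes, each with 1 x 1TB magnetic disk -> 3TB datastore.
print(vsan_datastore_capacity_gb([1024, 1024, 1024]))  # 3072

# A 50GB VMDK with FTT=1 consumes 100GB; with FTT=2 it consumes 150GB.
print(vmdk_consumption_gb(50, 1))  # 100
print(vmdk_consumption_gb(50, 2))  # 150
```

Note that because VMDKs are thin provisioned on the VSAN datastore by default, these figures are the eventual, not immediate, consumption.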

  23. Pingback: VMware announces VSAN GA, supports 32 host, up to 2M IOPS - Virtxpert

  24. Pingback: VMware Virtual SAN (VSAN) Launched | Blue Shift

  25. Pingback: VSAN Part 18 – VM Home Namespace and VM Storage Policies |

  26. Pingback: VSAN-tastic! | SplitBrained

  27. Pingback: Architecture Virtualisée : le VSAN est-il l'avenir ? | EMC FRANCE

  28. Pingback: Newsletter: April 5, 2013 | Notes from MWhite

  29. Pingback: Server SAN? What is it for? Will it stick around? And what exactly is it? | Notes from MWhite

  30. Pingback: VSAN Deploy and Manage links | Dennis Bray's Virtual Place

  31. Hi Cormac,

    Need to understand on the “Note” of VSAN Part 9 topic:

    On the vSphere HA interop:

    ….”Note however that if VSAN hosts also have access to shared storage, either VMFS or NFS, then these datastores may still be used for vSphere HA heartbeats”

    If, for example, all the VSAN hosts also have shared VMFS datastore(s) (say via FC SAN), can I then have TWO kinds of HA protection: if a VM is located on the VSAN datastore it gets VSAN HA protection, and if a VM is located on the shared VMFS datastore it gets traditional HA protection?


  32. Pingback: Evolution of Storage – What is your strategy? |

  33. Pingback: WordPress – Weekly Dump (weekly) | ResponsiPaul

  34. Pingback: Baby Dragon Triplets – VSAN Home Lab |

  35. Pingback: vCenter Operations Manager and vSphere Data Protection Interop |

  36. Pingback: Como montar un lab de VSAN sobre Workstation – Parte 1 | Prio´s Blog

  37. Pingback: Horizon 6 - Whats New |

  38. Pingback: VSAN Quick-Config Guide |

  39. Just to clarify the whole disk consumption based on the FTT setting, going back to your example of FTT=1 for a 50GB VM…

    Are you saying that it will consume an additional 100GB of space due to the 2 replicas created? Or are you saying that the original VMDK that is created is counted as one of those replicas?

    “therefore a 50GB VMDK created on a VM with an FTT=1 will consume 100GB”

    To be completely clear, would it be better to say “will consume an extra 100GB in addition to the 50GB VMDK”?

    I’ve done countless days of research over the past ~6 months or so, but every time I hear that, it throws off my understanding of FTT vs. disk consumption.

    Thank you in advance for your time, if you choose to respond.

    *I read your book BTW, you and Duncan Epping are rockstars in the world of virtualization….really good read. Couldn’t have asked for more.


    • It means that 2 x 50GB replicas are created for that VMDK, James, meaning 100GB in total is consumed on the VSAN datastore (not an additional 100GB). Note, however, that VMDKs are created thin provisioned on the VSAN datastore, so they won’t consume all of that space immediately, but over time.

      Thanks for the kind words on the book – always nice to hear feedback like that.

      • Thanks for the reply and clarification. So, to make sure I get this right: there will be a single VMDK for the actual VM running in the environment, BUT since VSAN is in use, if your FTT=1, then 100GB will be consumed by the 2 replicas that are created (over time, with thin provisioning).

        I think my confusion is in the semantics of how everyone explains it.

        • Yep – you got it. A single 50GB VMDK, made up of two 50GB mirrors/replicas, each replica sitting on a different disk (and host) but on the same datastore, eventually consuming 100GB in total on the VSAN datastore.

  40. I have a question for you regarding Part 13 in which you refer to “the VM swap file” and the “swap object”. How does the vmx-*.vswp file fit into all this? This file was introduced in 5.0. Does this file belong in the swap object? Is there a second swap object for it? Or does it simply belong to the VM namespace object?

    • Yes – this is what we are referring to. It is now instantiated as its own object on the VSAN datastore, and does not consume space in the VM namespace object.

  41. Pingback: A closer look at Maxta |

  42. Pingback: VMware VCP 5.5 Delta Exam (#VCP550D) Passed! | The Virtual Unknown

  43. Pingback: VSAN Cluster – Shutting down

  44. Pingback: All Things Virtual SAN | vmnick

  45. Pingback: vSphere 6.0 Storage Features Part 4: VMFS, VOMA and VAAI |

  46. Hi Cormac,
    A question about the “Virtual SAN 6.0 Design and Sizing Guide”. On page 46 it states: ‘For hybrid configurations, this setting defines how much read flash capacity should be reserved for a storage object. It is specified as a percentage of the logical size of the virtual machine disk object.’ So, a percentage of the logical size (used storage). But the example on page 47 takes the flash read cache reservation as a percentage of the physical space (allocated storage). Which is correct?


    • These statements are meant to reflect the same thing Stevin. When I say that it is a “percentage of the logical size”, this is not the same as “used storage”.

      All VMDKs on VSAN are thin by default. They can be pre-allocated (made thick) through the use of the Object Space Reservation capability.

      However, whether you use that or not, you request a VMDK size during provisioning, e.g. 40GB. Now you may only use a portion of this, e.g. 20GB, as it is thin provisioned.

      But Read Cache is based on a % of the requested size (the logical size/allocated storage), so 40GB. Hopefully that makes sense.
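The distinction in the reply above can be sketched as follows (illustrative names only): the flash read cache reservation is taken against the logical (requested) VMDK size, not against the space the thin VMDK actually uses.

```python
# Illustrative: read cache reservation is a percentage of the logical
# (requested) VMDK size, regardless of how much is actually used.

def read_cache_reservation_gb(logical_vmdk_size_gb, reservation_pct):
    return logical_vmdk_size_gb * reservation_pct / 100.0

# A 40GB VMDK with a 10% flash read cache reservation reserves 4GB of
# flash, even if the thin VMDK currently holds only 20GB of data.
print(read_cache_reservation_gb(40, 10))  # 4.0
```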


  47. Hi Cormac,
    regarding your book Essential VSAN (excellent book, btw). The book states: “In the initial version of VSAN, there is no proportional share mechanism for this resource when multiple VMs are consuming read cache, so every VM consuming read cache will share it equally.” How must I read this? Will the total flash read cache size be divided by the number of VMs consuming VSAN storage, and is that the amount of flash read cache each VM gets? (This would be a problem for read-intensive VMs with more storage than average.)
    What about the write cache? Every write has to go through the write cache, I presume? How is the write cache shared between VMs?

    thanks again.

  48. Hi Cormac
    A question about the SSD-to-HDD ratio: what is the best ratio? At the system level, if there is only one HDD, I believe performance will not be good (your data will be gated by a single HDD interface). But if there are around 10 HDDs, the SSD may not provide enough cache for all of them. Is there a perfect ratio?

    • It is completely dependent on the VMs that you deploy. If you have very I/O-intensive VMs, each with large working sets (data in a state of change), then you will need a large SSD:HDD ratio. If you have very low-I/O VMs with quite small working sets, you can get away with a smaller SSD:HDD capacity ratio. Since it is difficult to state what is best for every customer, we have used a 10% rule of thumb to cover most virtualized application workloads.

      • Appreciated, Cormac. I understand that the ratio will determine performance, and users can configure it for their application case; it provides flexibility.
        I may not have made it clear: the ratio I mentioned is the number of physical devices, not capacity.
        Or does performance have no relationship to the physical device-count ratio, and is it only affected by the SSD:HDD capacity ratio?

        • This is one of those “it depends” answers, Lyne.

          If all of your writes are hitting the cache layer, and all of your reads are also satisfied by the cache layer, and destaging from flash to disk is working well, then 1:1 ratio will work just fine.

          If however you have read cache misses that need to be serviced from HDD, or there is a large amount of writes in flash that need to be regularly destaged from flash to HDD, then you will find that a larger ratio, and the use of striping across multiple HDDs for your virtual machine objects can give better performance.

  49. Yes, Cormac.
    That’s my concern: we’re struggling with the performance difference between a 1 SSD : 4 HDD and a 1 SSD : 5 HDD configuration.
    I think with a big SSD, there should be less chance of a cache miss.
    And even on a cache miss, 4 HDDs versus 5 HDDs shouldn’t make a big difference, right?
    Maybe I need to set up an environment and collect some test data. :-)
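The 10% rule of thumb mentioned earlier in this thread can be sketched as follows. Note that, per the replies above, it is a capacity ratio rather than a physical device-count ratio (function names are illustrative):

```python
# Illustrative sketch of the ~10% flash-to-consumed-capacity rule of
# thumb for hybrid VSAN cache sizing. Capacity ratio, not device count.

def recommended_flash_gb(anticipated_consumed_gb, pct=10):
    """Suggest flash cache capacity as a percentage of the storage the
    VMs are anticipated to consume."""
    return anticipated_consumed_gb * pct / 100.0

# If VMs are expected to consume ~4TB on the datastore, plan for
# roughly 400GB of flash cache capacity across the cluster.
print(recommended_flash_gb(4096))  # 409.6
```

Whether one SSD fronts 4 HDDs or 5 matters less than whether the total flash capacity keeps the working set cached; cache misses, not spindle count alone, are what drive the performance difference.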

  50. Pingback: » Storage Field Day 7 – Day 2 – VMware

  51. Pingback: Booting vSphere ESXi 6.0 From USB Stick To Successfully Build a VSAN 6.0 – Including An Apple Mac Mini Late 2014 (7,1) | Cloud Jockey

  52. Hi Cormac,
    I run VSAN with 3 hosts, with LACP configured between the servers and the switch, and 512GB Samsung 850 Pro SSDs. When I test copy speed between 2 VMs running on VSAN, the speed is between 20MB/s and 60MB/s. What is my problem?
    Note: in Smart Storage Administrator I disabled caching on the SAS and SSD disks, then tested, and the speed was very bad. I then deleted the arrays and recreated them with caching enabled, and again the speed was very bad. Please help.

    • Hi Morteza,

      Noticed you’re using the same Samsung consumer-grade SSDs that I also thought would work. Are you using them as the caching tier or as the capacity drives? In my case, I used them as the caching tier and had all sorts of issues, even down to Permanent Disk Loss errors randomly appearing, requiring a host reboot. I’ve since moved them to the capacity tier and put in some enterprise SSDs and, so far, haven’t had any further issues.



  53. I posted this on the VM/Host affinity groups, but didn’t get a reply. I’m looking at setting up a VSAN stretched cluster. Can you help answer this?


    How do VM/Host affinity groups work with fault domains? I’m looking at setting up a VSAN with site A and site B as separate fault domains. As I understand it, by doing that, when I set FTT=1 the data will be replicated to site B instead of to another node at site A. This is to cover the case where we lose the entire rack at site A: the VMs will be able to reboot at site B off of the replicated data there.

    If I were to use VM/Host affinity groups, then wouldn’t I need to replicate to a second node at site A? Would that mean setting FTT=2, and it would replicate to a node at site A, and a node at site B? Maybe VM/Host affinity groups don’t work when using fault domains. Can you help me sort that out?

    • First, VSAN Stretched Cluster only supports FTT=1. Fault Domains and FTT work together.

      If you have a failure on site A, the VM/Host Affinity rules will attempt to restart the VM on the same site, i.e. site A.

      If you have a complete site failure (e.g. lost power on site A), the VM/Host affinity rules will then attempt to restart the VM on the remote site, i.e. site B.

      You still need to use Fault Domains with Stretched Cluster, but simply as a way of grouping the hosts on each site together.

      This should be well documented in the stretched cluster guide. There is also a PoC guide due to be released very soon which will provide you with further detail.

      • Thanks for your reply.

        So if a stretched cluster has FTT=1, then doesn’t that mean it will only replicate data to another node at site B? If it only replicates to another node at site B, and a node at site A goes down, how will VM/HA rules be able to restart the VM on the same site A?

      • Hi,

        Can you say if removing this limitation of Stretched Cluster (FTT only =1) is on the roadmap? We are looking at implementing it but would like to have 2 copies on the primary site + 1 on the secondary (or maybe 2+2 active-active configuration)

        Thanks, Vjeran

          • This didn’t make it into the 6.2 release? Sometimes those details don’t get advertised at launch, so I’m still hoping… 😉

  54. Pingback: Why you can, and should, write a book

  55. Pingback: Architecting IT | Is it time for VMware to Open Source The ESX Hypervisor?

  56. Pingback: VMware VSAN 6.2, what’s new? | VDICloud

  57. Pingback: De VSAN 6.1 à VSAN 6.2 … quelle aventure ! –

  58. Pingback: VSAN Cormac Blog 〜 VSAN におけるオブジェクトとコンポーネントの考え方〜 - Japan Cloud Infrastructure Blog - VMware Blogs

  59. Pingback: VSAN Cormac Blog 〜VASAの役割〜 - Japan Cloud Infrastructure Blog - VMware Blogs

  60. Pingback: VSAN Cormac Blog 〜手動・自動モードについて〜 - Japan Cloud Infrastructure Blog - VMware Blogs

  61. Pingback: FlashSoft I/O Filter VAIO Setup Steps -
