I am presenting at a number of upcoming VMware User Group (VMUG) conferences over the next month or so. I'll essentially be giving my Virtual SAN (VSAN) pitch, along with some demos on installing and configuring it.
On November 21st, I’ll be presenting at the UK National VMUG (for the 3rd year in a row) at the National Motorcycle Museum in Solihull. More information about the UK National VMUG here, including registration.
On December 3rd, I'll be at the Nordic VMUG in Copenhagen, Denmark. This is being held at the Bella Center, where VMworld Europe was held before it moved to Barcelona. More information about the Nordic VMUG here, including registration.
Last but not least, on December 5th, I will be near Antwerp for the Belgium VMUG. More information about the Belgium VMUG here, including registration.
I hope to see many of you there, and show off the Virtual SAN product to you. Have your questions ready.
Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage
This is quite a unique aspect of VSAN, and it plays directly into what I believe to be one of the crucial factors of software-defined storage. It's probably easier to use an example to explain why being able to change storage policies on the fly is such a cool feature.
Let's take a scenario where an administrator has deployed a VM with the default VM storage policy, which is that the VM storage objects should have no disk striping and should tolerate one failure. The layout of the VM could look something like this:
The admin then notices that the virtual machine deployed on their VSAN is getting a 90% read cache hit rate, which means that 10% of reads must be serviced from HDD. At peak time, this VM is doing 2,000 read operations per second, so 200 reads per second (the 10% of reads which are cache misses) need to be serviced from HDD. The specifications on the HDDs state that each disk can do 150 IOPS, meaning that a single disk cannot service those 200 IOPS. Therefore, to meet the I/O requirements of the VM, a stripe width of two disks should be implemented.
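To make the stripe width arithmetic concrete, here is a quick back-of-the-envelope sketch. It is purely illustrative; the numbers are simply the ones from the example above.

```python
import math

# Figures from the example above (illustrative only).
peak_read_iops = 2000     # VM's peak read operations per second
cache_hit_rate = 0.90     # fraction of reads serviced by the SSD read cache
hdd_iops = 150            # IOPS a single HDD can sustain, per its spec sheet

# Reads that miss the cache must be serviced from HDD.
hdd_reads = peak_read_iops * (1 - cache_hit_rate)    # 200 IOPS

# Disks needed to absorb the cache misses, i.e. the stripe width.
stripe_width = math.ceil(hdd_reads / hdd_iops)       # ceil(200/150) = 2

print(f"{hdd_reads:.0f} IOPS from HDD -> stripe width of {stripe_width}")
```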
In this next post, I will examine some failure scenarios. I will concentrate on ESXi host failures, but suffice it to say that a disk or network failure can also have consequences for virtual machines running on VSAN. There are two host failure scenarios, highlighted below, which can impact a virtual machine running on VSAN:
- An ESXi host that is not running the VM, but holds some of its storage objects, suffers a failure
- An ESXi host on which the VM is running suffers a failure
Let’s look at these failures in more detail.
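Before doing so, it helps to know the ground rule VSAN uses to decide whether a storage object survives a failure: an object remains accessible as long as more than 50% of its components (replicas plus witnesses) are still available. Here is a minimal sketch of that quorum check; the host and component names are hypothetical.

```python
# Quorum check sketch, assuming the documented VSAN rule that an object
# stays accessible while more than 50% of its components are available.
components = {
    "replica-1 (esxi-01)": True,   # replica on a surviving host
    "replica-2 (esxi-02)": False,  # replica on the failed host
    "witness (esxi-03)":   True,   # tie-breaking witness component
}

available = sum(components.values())
accessible = available > len(components) / 2
print(f"{available}/{len(components)} components available -> "
      f"object is {'accessible' if accessible else 'inaccessible'}")
```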
I will start with a caveat. The plan is to support both solid-state disks (SSDs) and PCIe flash devices on VSAN. However, for the purposes of this post, I will refer to this flash resource as an SSD for simplicity.
SSDs serve two purposes in VSAN: they act as both a read cache and a write buffer. This dramatically improves the performance of virtual machines running on the vsanDatastore. In some respects VSAN can be compared to a number of 'hybrid' storage solutions on the market, which also use a combination of SSDs and HDDs to boost I/O performance while scaling out capacity on low-cost HDDs. Read on to learn more about read caching and write buffering on VSAN.
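As a back-of-the-envelope illustration of how the flash device in a disk group is split between these two roles, here is a small sketch assuming the default 70% read cache / 30% write buffer split; the SSD size used is just an example.

```python
# Carving up a host's flash capacity, assuming the default 70/30 split
# between read cache and write buffer. The 400 GB device is hypothetical.
ssd_capacity_gb = 400

read_cache_gb = ssd_capacity_gb * 0.70    # 280 GB of read cache
write_buffer_gb = ssd_capacity_gb * 0.30  # 120 GB of write buffer

print(f"Read cache: {read_cache_gb:.0f} GB, "
      f"write buffer: {write_buffer_gb:.0f} GB")
```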
In this post, the VSAN capabilities are examined in detail. These capabilities, which are surfaced by the VASA storage provider when the cluster is configured successfully, are used to set availability, capacity and performance policies on a per-VM basis when that VM is deployed on the vsanDatastore. There are five capabilities in the initial release of VSAN, as shown below.
I will also try to highlight where you should use a non-default value for these capabilities.
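To give a feel for what a policy built from these capabilities looks like, here is a sketch of the defaults expressed as a plain Python dictionary. The key names are approximations of the capability names; real policies are authored through the VM Storage Policies workflow in the vSphere Web Client (or the SPBM APIs), not a dictionary.

```python
# The five capabilities from the initial VSAN release, with their defaults.
# Key names approximate the capability names; this is illustrative only.
default_policy = {
    "hostFailuresToTolerate": 1,    # number of failures to tolerate
    "stripeWidth": 1,               # number of disk stripes per object
    "cacheReservation": 0,          # flash read cache reservation (%)
    "proportionalCapacity": 0,      # object space reservation (%)
    "forceProvisioning": False,     # provision even if policy can't be met
}
```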