This is a question that has come up a number of times. Many of you will now be familiar with the VM Storage Policy capability Number of Failures To Tolerate for VSAN, which defines how many host failures the VSAN cluster can tolerate while still providing a full copy of the data, allowing a virtual machine to remain available. In this short post, I will explain how many physical ESXi hosts you need to accommodate the Number of Failures To Tolerate requirement in the VM Storage Policy.
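As a quick rule of thumb (the little Python sketch below is my own illustration, not VMware tooling): with VSAN's mirrored objects, tolerating n failures means n+1 replica copies plus n witness components, each on a distinct host, which works out to 2n + 1 ESXi hosts.

```python
# Minimal sketch (not VMware tooling): VSAN places n+1 replicas plus
# n witness components on distinct hosts to tolerate n failures.
def hosts_required(failures_to_tolerate: int) -> int:
    return 2 * failures_to_tolerate + 1

for ftt in range(1, 4):
    print(f"FailuresToTolerate={ftt} -> at least {hosts_required(ftt)} hosts")
# FailuresToTolerate=1 -> at least 3 hosts
# FailuresToTolerate=2 -> at least 5 hosts
# FailuresToTolerate=3 -> at least 7 hosts
```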
In this post, we talk about a particular behaviour when using the default (or None) policy with VSAN. I have stated many times in the past that when a VM is deployed on the VSAN datastore, it behaves as if it is thinly provisioned unless the capability ‘Object Space Reservation’ (OSR) is specified in the VM Storage Policy. OSR pre-allocates space on the VSAN datastore for the virtual machine’s storage objects, and is specified as a percentage of the actual VMDK size. However, there is a slightly different behaviour when the default policy is used. Once again, I was…
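To make the arithmetic concrete, here is a hypothetical helper (my own sketch, not a VMware API) estimating the capacity an OSR setting pre-allocates, under the assumption that each mirror copy implied by Number of Failures To Tolerate carries its own reservation:

```python
# Hypothetical helper, not a VMware API: estimate the raw capacity an
# Object Space Reservation setting pre-allocates on the VSAN datastore.
# Assumes mirroring, where FTT=n means n+1 full copies of the VMDK.
def osr_reserved_gb(vmdk_size_gb: float, osr_percent: float, ftt: int) -> float:
    replicas = ftt + 1
    return vmdk_size_gb * (osr_percent / 100.0) * replicas

# A 40 GB VMDK with OSR=100% and FTT=1 reserves 80 GB of raw capacity;
# with OSR=0 (the thin default) nothing is reserved up front.
print(osr_reserved_gb(40, 100, 1))  # 80.0
print(osr_reserved_gb(40, 0, 1))    # 0.0
```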
I’m sure Frank Denneman will need no introduction to many of you reading this article. Frank and I both worked in the technical marketing organization at VMware, before Frank moved on to PernixData last year and I moved to Integration Engineering here at VMware. PernixData FVP 1.0 was released last year, and I did a short post on them here. I’d seen a number of people discussing new FVP features in the community, especially after a presentation by PernixData co-founder Satyam at Tech Field Day 5 (#TFD5). I decided to reach out to Frank and see if he could spare some time to…
I’ve been doing a bit of work over the past few weeks on the adapters for vCenter Operations (vC Ops) with my old pal Paudie. We are working on vCenter Operations 5.8 in a vSphere 5.5U1 environment. Since we have a Brocade Fibre Channel switch and an EMC VNX array in our lab, I wanted to deploy the Management Pack for Storage Devices (MPSD) and the Brocade SAN Analytics Management Pack, and see what information we could glean from those extension packs. When we completed the configuration, we were able to go into the vC Ops customs…
Those of you familiar with VSAN will know that one of the capabilities which can be placed in a VM Storage Policy is Number of Disk Stripes Per Object (stripe width for short). I covered this in an earlier post which looked at the various VSAN capabilities. Recently, a customer who had not specified a stripe width in the VM Storage Policy was perplexed to find that his storage objects had indeed been striped across a number of disks. He reached out to ask if I could provide an explanation.
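One well-documented reason this happens is VSAN's 255 GB maximum component size: any object larger than that is split into multiple components regardless of the stripe width setting, and those components may land on different disks. The toy Python sketch below (my own illustration, not anything VSAN ships) shows the arithmetic:

```python
import math

# Toy illustration (not VSAN code): components are capped at 255 GB, so
# large VMDKs are split into multiple components even with stripe width 1.
MAX_COMPONENT_GB = 255

def min_components(vmdk_size_gb: float) -> int:
    return max(1, math.ceil(vmdk_size_gb / MAX_COMPONENT_GB))

print(min_components(200))  # 1 -> no splitting needed
print(min_components(500))  # 2 -> split despite no stripe width in the policy
```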
I watched a very cool demonstration this morning from the All Flash Array vendor, SolidFire. I spoke with SolidFire at the end of last year, and did a blog post about them here. One of the most interesting parts of our conversation last year was how SolidFire’s QoS feature and VMware’s Storage I/O Control (SIOC) feature could interoperate. In a nutshell, QoS works at the datastore/volume layer whereas SIOC deals with the VM/VMDK layer. Last week, Aaron Delp and Adam Carter of SolidFire did an introduction to QoS, both on vSphere and on the SolidFire system. And they also did…
I was having a conversation with one of our tech support guys (Greg Williams) recently about the relaxed requirement in vSphere 5.5 that now allows Raw Device Mappings (RDMs) presented to different hosts using different SCSI identifiers to still support vMotion operations. You can read the post where I described how the restriction was relaxed here. Greg mentioned that he was handling a case where customers wished to share a physical mode/passthru RDM between VMs on different ESXi hosts with a view to running Microsoft Clustering Services (MSCS) on top. We call this CAB or Clustering…
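For context, a CAB configuration hinges on attaching the shared physical mode RDM to a dedicated virtual SCSI controller whose bus sharing is set to physical on every node. The .vmx fragment below is purely illustrative; the datastore path and file names are placeholders, not taken from Greg's case:

```
# Illustrative .vmx fragment for one MSCS node in a CAB setup
# (paths and names are placeholders):
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogicsas"      # SAS controller for Windows 2008 and later
scsi1.sharedBus = "physical"          # share the bus across ESXi hosts
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/datastore1/shared/quorum-rdm.vmdk"
scsi1:0.mode = "independent-persistent"
```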