I wouldn’t normally call out new patch releases in my blog, but this one has an important fix for Virtual SAN users. As per KB article 2102046, this patch addresses a known issue with clomd. The symptoms are as follows:

- Virtual machine operations on the Virtual SAN datastore might fail with an error message similar to the following: create directory <server-detail>-<vm-name> (Cannot Create File)
- The clomd service might also stop responding.
- The Virtual SAN cluster might report that the Virtual SAN datastore is running out of space even though space is available in the datastore. An error message similar to the…
We made a number of enhancements to Storage DRS in vSphere 6.0, and this article will discuss those changes and enhancements. There is a white paper which discusses many of the previous limitations of Storage DRS interoperability, and I’d recommend reviewing it. Although a number of years old, it highlights many of the Storage DRS interoperability concerns. As you will see, a great many of these have now been addressed, along with some pretty interesting feature enhancements.
Although most of my time is dedicated to Virtual SAN (VSAN) these days, I am still very interested in the core storage features that are part of vSphere. I reached out earlier to a number of core storage product managers and engineers to find out what new and exciting features are included in vSphere 6.0. The first feature is one that I know a lot of customers are waiting on – NFS v4.1. Yes, it’s finally here.
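As a quick illustration of the new capability, here is a hedged pyVmomi sketch for mounting an NFS v4.1 datastore; the vCenter name, credentials, ESXi host, server names, export path and datastore name are all placeholders, and this is a sketch rather than anything from the original post:

```python
# Hedged pyVmomi sketch: mounting an NFS v4.1 datastore on an ESXi host.
# All names and credentials below are placeholders. vSphere 6.0 added the
# "NFS41" volume type, remoteHostNames for multipathing, and Kerberos
# security types for NAS datastores.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Pick an ESXi host by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
view.DestroyView()

spec = vim.host.NasVolume.Specification(
    remoteHost="nfs-a.example.com",
    remoteHostNames=["nfs-a.example.com", "nfs-b.example.com"],  # NFS 4.1 multipathing
    remotePath="/exports/datastore01",
    localPath="nfs41-ds01",     # datastore name as it appears in vCenter
    accessMode="readWrite",
    type="NFS41",
    securityType="AUTH_SYS",    # or SEC_KRB5 when Kerberos is configured
)
host.configManager.datastoreSystem.CreateNasDatastore(spec)
Disconnect(si)
```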
There is one policy setting that I have yet to discuss in any great detail in my blog posts about VSAN: ForceProvisioning. This policy setting, when placed in the VM Storage Policy, allows Virtual SAN to violate the NumberOfFailuresToTolerate (FTT), NumberOfDiskStripesPerObject (SW) and FlashReadCacheReservation (FRCR) policy settings during the initial deployment of a virtual machine. This can be useful for many reasons. One reason is that it enables the boot-strapping of a vCenter Server on a VSAN deployment, as highlighted by William Lam in this excellent blog post on the subject. Another reason is that it allows the provisioning of virtual machines…
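As a hedged illustration of what ForceProvisioning does to the admission decision at deployment time (a simplified Python sketch, not actual VSAN code; the capability names mirror those used in VSAN VM Storage Policies):

```python
# Illustrative sketch only: how forceProvisioning changes the admission
# decision when a VM is deployed. The capability names mirror those in
# VSAN VM Storage Policies; the logic itself is a simplification.
policy = {
    "hostFailuresToTolerate": 1,  # NumberOfFailuresToTolerate (FTT)
    "stripeWidth": 2,             # NumberOfDiskStripesPerObject (SW)
    "cacheReservation": 0,        # FlashReadCacheReservation (FRCR)
    "forceProvisioning": 1,       # deploy even if FTT/SW/FRCR cannot be met
}

def can_deploy(policy: dict, cluster_can_meet_requirements: bool) -> bool:
    """With forceProvisioning set, the object is provisioned anyway and
    brought into compliance later, once resources become available."""
    return cluster_can_meet_requirements or bool(policy.get("forceProvisioning"))

print(can_deploy(policy, cluster_can_meet_requirements=False))  # True
```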
I was having some discussions recently on the community forums about Virtual SAN behaviour when a VM storage policy is changed on the fly. This is a really nice feature of Virtual SAN whereby requirements related to availability and performance can be changed dynamically without impacting the running virtual machine. I wrote about it in the blog post here. However, there are some important considerations to take into account when changing a policy on the fly like this.
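For anyone who wants to script such a change, here is a minimal pyVmomi sketch, assuming placeholder vCenter credentials, VM name and profile ID (a real profile ID would be looked up through the SPBM/pbm API); this is an illustration, not the exact method from the post:

```python
# Hedged pyVmomi sketch: reapplying a different VM Storage Policy to a
# running VM. Host name, credentials, VM name and profileId are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm-01")
view.DestroyView()

# Attach the new policy to the VM home object; per-disk policy changes
# would additionally carry a profile on each deviceChange entry.
spec = vim.vm.ConfigSpec(vmProfile=[
    vim.vm.DefinedProfileSpec(profileId="<profile-uuid>")])
vm.ReconfigVM_Task(spec)

Disconnect(si)
```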
I was involved in an interesting case recently. It was interesting because the customer was running an 8 node cluster, 4 disk groups per host and 5 x ~900GB hard disks per disk group, which should have provided somewhere in the region of 150TB of storage capacity (with a little overhead for metadata). But after some maintenance tasks, the customer was seeing only approximately 100TB on the VSAN datastore. This was a little strange since the VSAN status in the vSphere web client was showing all 160 disks claimed by VSAN, yet the capacity of the VSAN datastore did not…
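To make the arithmetic concrete, here is a quick Python sanity check of those figures (the numbers come from the case above; the exact raw total depends on the precise disk size, hence the approximation):

```python
# Quick sanity check of the expected raw capacity from the figures above.
hosts = 8
disk_groups_per_host = 4
disks_per_group = 5
disk_size_tb = 0.9  # each magnetic disk is roughly 900 GB

total_disks = hosts * disk_groups_per_host * disks_per_group
raw_capacity_tb = total_disks * disk_size_tb

print(total_disks)                 # 160 disks, matching the web client count
print(f"~{raw_capacity_tb:.0f}TB") # ~144TB, i.e. the ~150TB region expected
                                   # (before metadata overhead), not the ~100TB seen
```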
While working on a quick reference proof-of-concept/evaluation guide, it has become very clear to me that one of the areas that causes the most confusion is what happens when a storage device is either manually removed from a host participating in the Virtual SAN cluster or the device suffers a failure. These are not the same thing from a Virtual SAN perspective. To explain the different behaviour, it is important to understand that Virtual SAN has two distinct failure states for components: ABSENT and DEGRADED.
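To make the distinction concrete, here is a minimal Python sketch of the behaviour (illustrative pseudologic only, not VSAN code; the 60-minute figure reflects the default value of the VSAN.ClomRepairDelay advanced setting):

```python
# Illustrative pseudologic: how the two component failure states drive
# rebuild behaviour. Not VSAN source code; a simplified model only.
from enum import Enum

class ComponentState(Enum):
    ABSENT = "absent"      # device removed / host down: it may come back
    DEGRADED = "degraded"  # device failed: it will not come back

CLOM_REPAIR_DELAY_MIN = 60  # default delay before rebuilding ABSENT components

def rebuild_action(state: ComponentState) -> str:
    if state is ComponentState.DEGRADED:
        # Permanent failure: start rebuilding replacement components immediately.
        return "rebuild immediately"
    # Possibly transient failure: wait in case the component returns,
    # which avoids an unnecessary full resync.
    return f"rebuild after {CLOM_REPAIR_DELAY_MIN} minutes if still absent"

print(rebuild_action(ComponentState.ABSENT))
print(rebuild_action(ComponentState.DEGRADED))
```

The rationale behind the two states is that an ABSENT component may well return (a rebooted host, a reseated disk), so an immediate rebuild could be wasted work, whereas a DEGRADED component is never coming back.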