Announcing the Virtual SAN 6.0 Health Check Plugin

Today VMware announces the Virtual SAN 6.0 Health Check Plugin, a feature that will check your Virtual SAN configuration, both proactively and reactively, and highlight any abnormal conditions found in the cluster. It is available to all our VSAN customers right now. Not only does it check the health of the cluster, but it also checks the state of the network, host connectivity, physical disk status, and underlying virtual machine object state. This is a great tool for verifying that an initial deployment or proof-of-concept of VSAN has been rolled out successfully, giving you confidence in your VSAN deployment. It is also useful for ongoing monitoring and maintenance of your Virtual SAN cluster.
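For those who prefer the command line, the plugin also extends RVC. As a minimal sketch (the cluster path is purely illustrative, and the exact command names should be verified against the RVC extensions shipped with your copy of the plugin), running a health summary against the cluster looks something like this:

> vsan.health.health_summary /localhost/DC/computers/VSAN-Cluster

This produces a pass/fail rundown of the same checks surfaced in the vSphere Web Client.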

Continue reading

vSphere 6.0 Storage Features Part 7: VAAI XCOPY improvements

The more astute of you who have already moved to vSphere 6.0, and like looking at CLI outputs, may have observed some new columns/fields in the PSA claimrules when you run the following command:

# esxcli storage core claimrule list --claimrule-class=VAAI

The new fields are as follows:

XCOPY Use Array    XCOPY Use            XCOPY Max
Reported Values    Multiple Segments    Transfer Size
---------------    -----------------    -------------
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
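These per-rule XCOPY settings can be supplied when adding a VAAI claim rule. Here is a rough sketch; the rule number and the vendor and model strings are purely illustrative, and you should confirm the exact option names with "esxcli storage core claimrule add --help" on your build. In short, -a requests use of array reported values, -s allows multiple segments, and -m sets the maximum transfer size:

# esxcli storage core claimrule add -r 65430 -t vendor -V MyVendor -M MyModel -P VMW_VAAIP_T10 -c VAAI -a -s -m 200
# esxcli storage core claimrule load --claimrule-class=VAAI

After loading the rule, the claim rule listing above should reflect the new values for the matching devices.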

Continue reading

Virtual Volumes (VVols) – Syslog and Scratch Usage

I had a very interesting query on my recent VVol post on vSphere HA interop. In that post I showed how the VVol datastore could be used for datastore heartbeating. The question then arose as to whether the VVol datastore could be used for other things, such as a syslog and scratch destination. I couldn’t see any reason why not, but just to be sure, I tested it out in the lab. The quick answer is yes, you can use a Config-VVol for syslog, and no, you cannot use a Config-VVol for scratch. If you want to see the steps involved, and what happens when you do set a Config-VVol as the destination for these features, please read on.
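For reference, pointing syslog at the Config-VVol is just the standard syslog configuration, nothing VVol-specific. A quick sketch, where the datastore path and folder name are purely illustrative:

# esxcli system syslog config set --logdir=/vmfs/volumes/VVOL-DS/logs
# esxcli system syslog reload

The reload step is needed for the new log directory to take effect.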

Continue reading

When and why do we “stun” a virtual machine?

This is a question that seems to come up regularly, but I don’t think it is covered in any great detail in external-facing documentation. The questions are: when do we “stun” (in other words, quiesce) virtual machines, why do we do it, and, more importantly, how long can a stun operation take? One of our staff engineers, Jesse Pool, put together some really good explanations around the VM stun operation, which I am leveraging for this post. I took a particular interest in this as I wrote a bunch of snapshot posts recently around Virtual Volumes (VVols), so I think this fits in quite nicely. A “stun” operation means we pause the execution of the VM at an instruction boundary and allow in-flight disk I/Os to complete. The stun operation itself is not normally expensive (typically a few hundred milliseconds, but it could be longer if there is any sort of delay elsewhere in the I/O stack).
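If you want to see stun times for yourself, the VM’s vmware.log records them. A quick way to pull them out (the datastore path is illustrative, and the exact log format can vary between releases):

# grep -i stun /vmfs/volumes/datastore1/MyVM/vmware.log
vcpu-0| I120: Checkpoint_Unstun: vm stopped for 508608 us

The “vm stopped for” figure is the length of time the VM was stunned, in microseconds.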

Continue reading

Virtual Volumes (VVols), vSphere HA and Heartbeat Datastores

I had a few queries recently on how Virtual Volumes (VVols) work with vSphere HA. In particular, I had a number of questions around whether or not VVol datastores could be used as heartbeat datastores by vSphere HA. The answer is yes, the VVol datastore can be used for vSphere HA datastore heartbeating. If you want to see how, please read on.

I think these queries may have arisen due to the fact that we do not use datastore heartbeating with Virtual SAN (VSAN). Just by way of reminder, the master host in a vSphere HA cluster uses a heartbeat datastore when it can no longer communicate with a slave host over the management network. This allows the master to determine whether a slave host has failed outright, is in a network partition, or is network isolated. The master can then decide what to do with the VMs that reside on the slave host. If the slave host has also stopped datastore heartbeating, it is considered to have failed and its virtual machines are restarted elsewhere in the cluster.

This isn’t possible with VSAN because the storage is local to each host: if a host is partitioned and updates heartbeats on its local storage, there is no way for the other hosts in the VSAN cluster to see them. A VVol datastore, on the other hand, is shared storage, which is why datastore heartbeating works; the other hosts can see the updates on the storage even if the host is off the network.

Continue reading

VSAN 6.0 Part 9 – Proactive Re-balance

This is another nice new feature of Virtual SAN 6.0. It is essentially a directive to VSAN to start re-balancing components belonging to virtual machine objects across all the hosts and all the disks in the cluster. Why might you want to do this? Well, it’s very simple. As VMs are deployed on the VSAN datastore, there are algorithms in place to distribute those components across the cluster in a balanced fashion. But what if a host was placed into maintenance mode, with the data on the host evacuated beforehand, and is now being brought back into the cluster after maintenance? What about adding new disks or disk groups to an existing node in the cluster (scaling up)? What if you are introducing a new node to the cluster (scaling out)? The idea behind proactive re-balance is to allow VSAN to start consuming these newly introduced resources sooner rather than later.
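Proactive re-balance is driven from RVC. A minimal sketch follows; the cluster path is purely illustrative, and there are additional options for thresholds and timing that you can discover via the command help:

> vsan.proactive_rebalance_info /localhost/DC/computers/VSAN-Cluster
> vsan.proactive_rebalance --start /localhost/DC/computers/VSAN-Cluster

The first command reports the current balance state of the disks in the cluster; the second kicks off the re-balance operation.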

Continue reading

VSAN 6.0 Part 8 – Fault Domains

One of the really nice new features of VSAN 6.0 is fault domains. Previously, there was very little control over where VSAN placed virtual machine components. In order to protect against something like a rack failure, you may have had to use a very high NumberOfFailuresToTolerate value, resulting in multiple copies of the VM data dispersed around the cluster. With VSAN 6.0, this is no longer a concern as hosts participating in the VSAN Cluster can be placed in different failure domains. This means that component placement will take place across failure domains and not just across hosts. Let’s look at this in action.
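On the host side, there is an esxcli namespace for this as well. As a hedged sketch (the fault domain name is purely illustrative, and you should confirm the option names against your build with --help):

# esxcli vsan faultdomain get
# esxcli vsan faultdomain set --fdname=Rack-A

The get command shows which fault domain the host currently belongs to, and the set command assigns it to a named fault domain, for example one per rack.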

Continue reading