
VSAN 6.0 Part 9 – Proactive Re-balance

This is another nice new feature of Virtual SAN 6.0. It is essentially a directive to VSAN to start re-balancing components belonging to virtual machine objects across all the hosts and all the disks in the cluster. Why might you want to do this? Well, it’s very simple. As VMs are deployed on the VSAN datastore, algorithms are in place to distribute their components across the cluster in a balanced fashion. But what if a host was placed into maintenance mode, you requested that the data on the host be evacuated prior to entering maintenance mode, and you are now bringing that node back into the cluster after maintenance? What about adding new disks or disk groups to an existing node in the cluster (scaling up)? What if you are introducing a new node to the cluster (scaling out)? The idea behind proactive re-balance is to allow VSAN to start consuming these newly introduced resources sooner rather than later.

In 6.0, the proactive re-balance option is currently only available via an RVC command called vsan.proactive_rebalance. Proactive re-balance can be started and stopped via RVC. When you start it, you may also specify thresholds, how long the re-balance operation should run, and how much data may be moved per hour on each node in the VSAN cluster. If these are not specified, defaults are used. When the re-balancing has completed to your satisfaction, you can stop it.

Here is some further information on the usage.

usage: proactive_rebalance [opts] cluster
Configure proactive rebalance for Virtual SAN
   cluster: Path to ClusterComputeResource
              --start, -s:   Start proactive rebalance
          --time-span, -t:   Determine how long this proactive
                             rebalance lasts in seconds, only be
                             valid when option 'start' is specified
 --variance-threshold, -v:   Configure the threshold, that only if
                             disk's used_capacity/disk_capacity
                             exceeds this threshold, disk is
                             qualified for proactive rebalance, only
                             be valid when option 'start' is specified
     --time-threshold, -i:   Threshold in seconds, that only when
                             variance threshold continuously exceeds
                             this threshold, corresponding disk will
                             be involved to proactive rebalance, only
                             be valid when option 'start' is specified
     --rate-threshold, -r:   Determine how many data in MB could be
                             moved per hour for each node, only be
                             valid when option 'start' is specified
               --stop, -o:   Stop proactive rebalance
               --help, -h:   Show this message
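
Putting these together, starting a re-balance with explicit values might look something like the following. The values here are purely illustrative (a 24-hour run, a 0.3 variance threshold, a 30-minute time threshold and 2048 MB moved per hour per node); if you omit them, the built-in defaults are used:

> vsan.proactive_rebalance --start --time-span 86400 --variance-threshold 0.3 --time-threshold 1800 --rate-threshold 2048 0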

Some clarity might be needed for the start parameter “--variance-threshold”. The description states “Configure the threshold, that only if disk’s used capacity divided by disk capacity exceeds this threshold… ” In fact, the trigger condition is met only when the following calculation is greater than the <variance_threshold>:

(<used_capacity_of_this_disk> / <this_disk_capacity>) - 
(<used_capacity_of_least_full_VSAN_disk> / <least_full_disk_capacity>)

In other words, a disk qualifies for proactive re-balancing only if its fullness (used_capacity/disk_capacity) exceeds the fullness of the “least-full” disk in the Virtual SAN cluster by more than the threshold. The condition must also hold continuously for <time_threshold> seconds before the re-balancing process begins to move data.
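
To make the trigger concrete, here is a worked example with made-up numbers, using the default 30% variance threshold that appears in the output below. Suppose the fullest disk is 45% used and the least-full VSAN disk is 10% used:

(0.45) - (0.10) = 0.35, which is greater than 0.30

That disk qualifies, and once the condition has persisted for <time_threshold> seconds, re-balancing begins. If the least-full disk were instead 20% used, the difference (0.25) would fall under the threshold and no re-balancing would be triggered.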

> vsan.proactive_rebalance_info 0
Proactive rebalance is not running!
Max usage difference triggering rebalancing: 30.00%
Average disk usage: 5.00%
Maximum disk usage: 26.00% (21.00% above mean)
Imbalance index: 5.00%
No disk detected to be rebalanced
 
> vsan.proactive_rebalance -s 0
Proactive rebalance has been started!
 
> vsan.proactive_rebalance_info 0
Proactive rebalance start: 2014-12-11 14:13:10 UTC
Proactive rebalance stop: 2014-12-12 14:16:17 UTC
Max usage difference triggering rebalancing: 30.00%
Average disk usage: 5.00%
Maximum disk usage: 26.00% (21.00% above mean)
Imbalance index: 5.00%
No disk detected to be rebalanced
 
> vsan.proactive_rebalance -o 0
Proactive rebalance has been stopped!
>

So there is not much going on above, since the maximum disk usage is only 26% and a difference of 30% is needed before proactive re-balancing is initiated, but hopefully you get the idea. This is a nice new feature which will reduce the time it takes to get back to a nicely balanced cluster after a maintenance period, or when new resources are added to the cluster.

However, like any re-balancing activity, the rebuilds that result from moving components to other disks and hosts may impact overall performance, so this should be considered a maintenance task. Ideally, you should run it out of hours. Rebuild activity can be monitored through another RVC command, vsan.resync_dashboard, or through the UI. The latter is another enhancement in VSAN 6.0, where rebuild activity can now be monitored via the vSphere web client.
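
For reference, vsan.resync_dashboard takes the same cluster path argument as the other commands shown here; its output (omitted, as it depends on whatever resync activity is in flight) lists the objects currently resyncing and the bytes remaining to sync:

> vsan.resync_dashboard 0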

Finally, to check if the cluster is balanced, another RVC command (vsan.check_limits) will show the distribution of components across hosts and disks in the cluster.

> vsan.check_limits 0
2015-04-22 10:12:50 +0000: Querying limit stats from all hosts ...
2015-04-22 10:12:52 +0000: Fetching VSAN disk info  (may take a moment) ...
2015-04-22 10:12:53 +0000: Done fetching VSAN disk infos
+---------+------------------+------------------------------------------+
|Host     | RDT              | Disks                                    |
+---------+------------------+------------------------------------------+
|cs-ie-h03| Assocs: 26/45000 | Components: 10/9000                      |
|         | Sockets: 13/10000| naa.600508b1001c9b93053e6dc3ea9bf3ef: 20%|
|         | Clients: 0       | naa.600508b1001c9c8b5f6f0d7a2be44433: 0% |
|         | Owners: 5        | naa.600508b1001c1a7f310269ccd51a4e83: 12%|
|cs-ie-h01| Assocs: 20/45000 | Components: 19/9000                      |
|         | Sockets: 14/10000| naa.600508b1001cb683ff0e29252f9e6dcc: 0% |
|         | Clients: 0       | naa.600508b1001c79748e8465571b6f4a46: 7% |
|         | Owners: 0        | naa.600508b1001c23a8399f7e265077a602: 41%|
|cs-ie-h02| Assocs: 42/45000 | Components: 20/9000                      |
|         | Sockets: 19/10000| naa.600508b1001ca12eb15142cf602688a9: 21%|
|         | Clients: 0       | naa.600508b1001c62313d3c49ad8e90fbc8: 0% |
|         | Owners: 6        | naa.600508b1001c8da0d5a55ee769179e04: 28%|
|cs-ie-h04| Assocs: 44/45000 | Components: 11/9000                      |
|         | Sockets: 18/10000| naa.600508b1001c4b820b4d80f9f8acfa95: 7% |
|         | Clients: 3       | naa.600508b1001c6a664d5d576299cec941: 28%|
|         | Owners: 8        | naa.600508b1001c29d8145d6cc1925e9fb9: 0% |
+---------+------------------+------------------------------------------+