Getting started with HCIbench, the benchmark for hyper-converged infrastructure

This week I had the opportunity to roll out the HCIbench tool on one of my all-flash VSAN clusters (much kudos to my friends over at Micron for the loan of a bunch of flash devices for our lab). HCIbench is a tool developed internally at VMware to make deploying a benchmark against hyper-converged infrastructure (HCI) systems quite simple. In particular, we wanted something that customers could use on Virtual SAN (VSAN). It’s an excellent tool for those of you looking to do a performance test on hyper-converged infrastructures, hence the name HCIbench.

Please note that this blog post is not about discussing the results, as these will vary from environment to environment due to the open nature of VSAN’s HCL. This blog is more of a primer to assist the reader in getting started with HCIbench.

Step 1 – Deploy the OVA

To get started, you deploy a single HCIbench appliance called Auto-Perf-Tool. There’s nothing special about the appliance itself. It comes as an OVA, and if you’ve deployed an OVA appliance before, then this is no different. You provide the usual information such as where to place the appliance. For those of you wishing to test VSAN performance, you’ll most likely be deploying this appliance to a VSAN datastore.
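If you prefer the command line over the vSphere client wizard, the OVA can also be deployed with VMware's ovftool. This is only a sketch: the OVA filename, vCenter address, credentials, datastore, and inventory path below are placeholders for your own environment.

```shell
# Deploy the HCIbench appliance with ovftool (all names/paths are placeholders)
ovftool \
  --acceptAllEulas \
  --name="Auto-Perf-Tool" \
  --datastore="vsanDatastore" \
  --network="VM Network" \
  --powerOn \
  ./HCIBench.ova \
  'vi://administrator@vsphere.local@vcenter.example.com/Datacenter/host/VSAN-Cluster/'
```

The `--datastore` value is where the point about testing VSAN performance applies: placing the appliance on the VSAN datastore keeps everything on the storage under test.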

Step 2 – Point a browser at the appliance, and add vSphere environment info

The next step is to open the console and populate some information on the appliance, such as a root password and some network details. This is even easier if you use DHCP. When this information is provided, the appliance completes its boot process. At this point, you open a browser and point it to the IP address of the appliance on port 8080, and you are now presented with a template/form to populate. The first section of the form looks for information such as the vCenter server and credentials, data center, cluster name, network, datastore, etc. Note that in the current version, the VM Network must be on a standard vSwitch. You cannot use a distributed switch (DVS) portgroup at this time. The network defaults to “VM Network” and the datastore defaults to “VSAN datastore” automatically if these are not provided:
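Before filling in the form, it can be worth a quick check that the appliance's web UI is actually listening. A simple probe from any machine on the same network (the IP address below is a placeholder for your appliance's address):

```shell
# Check that the HCIbench UI responds on port 8080 (expect a 200-range code)
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.100:8080/
```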

[Update: 21-Oct-2015] Ensure that the Datastore Name field is populated in the most recent appliance. Although it is shown as not being required, the latest release supports multi-datastore deployments, so this field must be specified, even if it is the VSAN datastore that is being tested. If you do not add this, the benchmark will fail with “A required parameter is NULL, please re-check your configuration file !”

Step 3 – Add host and benchmark VM info

The next section is about the hosts, and the VMs that are going to run the benchmark. You add a list of ESXi hosts (the hosts that are participating in the VSAN cluster), one line at a time, and then supply information about the VM workload, including the number of VMs you wish to deploy, the number of disks per VM, and the size of each disk. In this example, I have 4 hosts so I will deploy 8 VMs, each with 4 disks, and each disk 10GB in size. These VMs will be distributed across all hosts in the cluster, leveraging the distributed nature of VSAN’s compute and storage.

Step 4 – Download and add vdbench zip file, and add parameter file

Once this is done, users need to provide access to the vdbench tool. Due to licensing issues, we are not allowed to distribute the vdbench benchmarking tool, so it needs to be downloaded from Oracle if you do not have it already. There is a link provided to the Oracle website to download the vdbench zip file, but you will need an account on Oracle’s site to access it. Once the vdbench zip file has been downloaded locally, it must then be uploaded to the appliance. The next part of the setup is to generate a vdbench parameter file, which has information such as I/O size, R/W ratio and whether the I/O should be random or sequential in nature. You should also state how long you want the test to run (3600 seconds = 1 hour below), as well as whether you want to dd the storage first (initialize it). Finally, decide if you want the benchmark VMs cleaned up once the test completes. Save the configuration. To make sure that everything is OK, run the validate test. This will verify that all the configuration parameters are correct, and will state whether it is OK to start the test.
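HCIbench generates the parameter file for you from the form inputs, but for reference, the kind of file it produces looks roughly like the sketch below. The values here are purely illustrative (a 4KB, 70% read, fully random workload on a 10GB device, running for 3600 seconds); they are not HCIbench defaults.

```
* Illustrative vdbench parameter file (values are examples only)
sd=sd1,lun=/dev/sda,openflags=o_direct,size=10g
wd=wd1,sd=sd*,xfersize=4k,rdpct=70,seekpct=100
rd=rd1,wd=wd1,iorate=max,elapsed=3600,interval=30
```

The `sd` line defines the storage device, `wd` the workload (I/O size, read percentage, randomness), and `rd` the run itself (rate, duration, reporting interval), mirroring the choices you make in the form.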

Step 5 – Monitoring the workloads

Click on the Test button to start the benchmark. The tool next deploys a bunch of VMs as per the configuration, each of which will run an instance of vdbench.

In my example, I had a 4 node cluster, and I selected 8 VMs to roll out. This will deploy 2 VMs per host in a distributed manner. In the screenshot to the left, you can see the original benchmark tool called Auto-Perf-Tool, and 8 additional VMs rolled out for the purpose of the test, each named vdbench–. Once the VMs have been rolled out, and are generating I/O, each of them can be examined for further information. For example, you can check to see that they have the appropriate number of disks as per the configuration, and have been deployed on the correct VM network. I also find it useful to select one of the VMs and open the Monitor > Performance view. In the Advanced view, I select the virtual disks and modify the “chart options” to select the read and write rate values. I can then see the amount of I/O that is in-flight from vdbench. In this particular case, I chose the reads and writes per second for each of the disks. This shows that vdbench is doing what it is supposed to do:

While the test is running, you will see the following displayed in the browser:

And when the test is complete, the following will be displayed:

Step 6 – Examine the results

You can now click on the results button, and navigate via the browser to where the results are stored. There is a text file for each VM which contains detailed IOPS, latency and throughput information. Here is an example of such a results output taken from my environment:
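Since each VM produces its own result file, it can be handy to aggregate the numbers across all of them. Here is a minimal Python sketch of the idea; the line format in `sample` is a simplified stand-in for illustration (real vdbench summary output has different columns), so you would need to adapt the regular expression to match your actual result files.

```python
import re

# Simplified stand-in for per-VM summary lines; real vdbench output
# uses different columns, so adjust the regex to your files.
sample = """\
vdbench-1: iops=25000 resp=1.2ms
vdbench-2: iops=24500 resp=1.3ms
"""

def aggregate(text):
    """Sum IOPS and average response time across per-VM result lines."""
    iops, resp = [], []
    for m in re.finditer(r"iops=(\d+)\s+resp=([\d.]+)ms", text):
        iops.append(int(m.group(1)))
        resp.append(float(m.group(2)))
    return sum(iops), round(sum(resp) / len(resp), 3)

total_iops, avg_resp = aggregate(sample)
print(total_iops, avg_resp)  # → 49500 1.25
```

Summing IOPS across VMs gives a rough cluster-wide figure, while averaging response time shows whether any one VM is an outlier worth investigating in its own result file.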

However, you can also navigate further along to what is essentially a VSAN Observer collection. Click on the stats.html file to display a VSAN Observer view of the cluster for the period of time that the test was running:

Note: The current version of the HCIbench appliance needs to reach out to the internet in order to get various fonts and CSS files needed to render VSAN Observer graphs. This same principle holds for VSAN Observer when run from vCenter server. If there is no path to the outside world, these VSAN Observer graphs captured by HCIbench will not render properly. In an upcoming release of the HCIbench appliance, this requirement will be addressed, and all of the necessary components to render the VSAN Observer graphs will be included with the appliance.


If things are not going as expected for some reason, there are four places to check.

Where do I get the bits?

Happy benchmarking!
