Getting started with Fusion-io and VSAN
I’ve been having lots of fun lately in my new role in Integration Engineering. It is also good to have someone local once again to bounce ideas off. Right now, that person is Paudie O’Riordan (although sometimes I bet he wishes I was in a different timezone 🙂 ). One of the things we are currently looking at is a VSAN implementation using Fusion-io ioDrive2 cards (which our friends over at Fusion-io kindly lent us). The purpose of this post is to show the steps involved in configuring these cards on ESXi and adding them as nodes to a VSAN cluster. Even though I am the one posting about it, Paudie did most of the work, so please consider following him on Twitter, as he has a lot of good vSphere/storage knowledge to share.
Step 1. Install the Fusion-io PCI-E devices
After installing the cards, make sure that the ESXi host can recognize them. Use the ESXi shell command lspci -v to check.
Take note of the controller’s DID/SVID/SSID, as you will need it when sourcing the driver for the card in the next step. In this case, it is 1aed:2001.
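As a rough sketch, the check from the ESXi shell looks like this. The grep filter is a convenience I'm adding here, assuming the card is labelled with a Fusion-io identifier in the output; if it is not, scan the full lspci -v listing for the 1aed:2001 IDs instead (your IDs may differ).

```shell
# List PCI devices from the ESXi shell; -v adds vendor/device detail.
# Filtering for "fusion" narrows the output to the ioDrive2 entries,
# assuming the device reports a Fusion-io label.
lspci -v | grep -i -A 2 "fusion"
```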
Step 2. Download the appropriate driver
Now, the VCG/HCL for VSAN is still a work in progress, so you won’t yet see the driver for the Fusion-io ioDrive2 listed there; for now, you have to search for it via the I/O Devices section of the HCL. You need a driver because ESXi does not ship with one for these cards. By searching the I/O Devices section for vendor Fusion-io and type SCSI, you will find the driver:
The VCG/HCL displays enough information to verify the adapter type from the lspci output captured previously. Compare the DID/SVID/SSID listed and ensure that this is the correct driver for your controller.
Step 3. Install the driver
In this example, the driver is on an NFS share mounted to the ESXi host. Then use the esxcli software command to install the driver:
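A sketch of the install, assuming the NFS volume is mounted under /vmfs/volumes/nfs-share and using a placeholder bundle filename; substitute the actual path and the offline bundle you downloaded via the HCL:

```shell
# Install the driver from the offline bundle (-d = depot zip) sitting
# on the mounted NFS share. Path and filename are placeholders.
esxcli software vib install -d /vmfs/volumes/nfs-share/scsi-iomemory-vsl-offline-bundle.zip
```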
Step 4. Reboot, and verify driver installed successfully
As the driver installation reported, you must reboot the host for the change to take effect. After rebooting, verify that the driver was applied to the system with the esxcli software vib get -n scsi-iomemory-vsl command:
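For reference, the verification looks something like this (the grep one-liner is an alternative quick check I'm adding, not from the original output):

```shell
# Confirm the VIB is installed and inspect its name, version and vendor.
esxcli software vib get -n scsi-iomemory-vsl

# A quicker one-line check against the full installed VIB list:
esxcli software vib list | grep -i iomemory
```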
The driver installation looks good.
Step 5. Check firmware version, and update if necessary
From the HCL/VCG, you can also see that there is a firmware version requirement; in this example, we required firmware version 110356. A command, /bin/fio-update-iodrive, installed on the ESXi host as part of the driver, is used to update the firmware on the ioDrive2 cards:
In this case, the firmware is already up to date on the card. If firmware had to be applied to the card, a reboot of the ESXi host would once again be required. You can now use other Fusion-io utilities, such as /bin/fio-status, to determine the status of the device:
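As a sketch of the two firmware-related commands; the .fff firmware filename below is a placeholder I've made up for illustration (the real file comes with the Fusion-io firmware download):

```shell
# Report device status, including the current firmware revision.
/bin/fio-status

# Apply new firmware if required (placeholder firmware file path).
# A host reboot is needed afterwards for it to take effect.
/bin/fio-update-iodrive /tmp/iodrive2_fw-110356.fff
```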
Step 6. Check ESXi recognizes the adapter
At this stage, ESXi should be able to see the adapter. Let’s look at the output of a couple of commands, namely esxcli storage core adapter list and esxcli storage core device list:
The adapter is visible and the device is available.
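The two checks, roughly as follows; the grep filter is my addition for readability, assuming the device display name includes a Fusion-io label:

```shell
# The ioDrive2 should appear in the adapter list alongside the other
# storage adapters, bound to the iomemory driver.
esxcli storage core adapter list

# The flash device should appear in the device list; among other
# details, check that "Is SSD" reports true for it.
esxcli storage core device list | grep -i -B 2 -A 10 "fusion"
```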
Now you are good to go. You can build your VSAN disk groups using the Fusion-io ioDrives as your flash device, and create your VSAN datastore for virtual machine deployments.
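If you prefer to script the disk group creation rather than use the Web Client, ESXi 5.5 also exposes an esxcli vsan namespace; a sketch, assuming that syntax, with placeholder device identifiers (look up the real ones via esxcli storage core device list):

```shell
# Pair the Fusion-io flash device (-s) with a magnetic disk (-d) to
# form a VSAN disk group. Both identifiers below are placeholders.
esxcli vsan storage add -s eui.1234567890abcdef -d naa.600508b1001c0123
```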
Great post. I did want to make a quick comment on the first section regarding ESXi not shipping with a driver. 5.5 Rollup 1 actually does have our drivers and utilities pre-installed, so there’s no need to do any of that. It’s always a good idea to check the firmware, but you could pretty much skip to Section 6.
You are correct. The ESXi 5.5.0 Driver Rollup 1 contains the driver. However, it is designed for fresh, greenfield installations of ESXi hosts. To apply the driver to existing ESXi hosts, you would have to go through the procedure highlighted here.