A closer look at the Infinio Cache Accelerator I/O Filter

The folks over at Infinio were kind enough to send me their latest Cache Accelerator I/O Filter so I could set it up in my lab. I must say, this seemed to be the most intuitive of the VAIO (vSphere APIs for I/O Filtering) plugins that I have used to date. In this post, I just want to run through the deployment of the filter, as opposed to looking at any of the potential performance benefits. If you want an overview of VAIO, have a read of the write-up that I did from VMworld 2014 here. I’ve also looked at the Infinio Cache Accelerator 2.0 in the past, but this was before it was available as an I/O Filter. So it is definitely time to revisit what they are doing in this space with their latest version, 3.1.3.

Initial Deploy

The initial deployment started with a Windows EXE. Once launched, it guides you through each of the steps to deploy the management portal onto your vSphere environment. The information needed is pretty standard. First, you provide vCenter credentials, and then you provide information such as which host to deploy to, which datastore, which network to use, a static or DHCP IP address, and finally some credentials to use to log in to the management portal once it is deployed. All going well, you should end up with something that looks like this:

Claim Host Resources for Cache Acceleration

The next step is obviously to open the Management Console, as highlighted above. On clicking the “Open Management Console” button, you will be met with something similar to the following:

Now you can start to accelerate VMs at this point, but the only cache that can be set up during this workflow is memory (unless I didn’t follow the workflow correctly, which is a distinct possibility). Anyway, what I wanted to do was use some local SSDs as cache for accelerating my I/O. To do this, I must first claim these devices for Virtual Flash use in vSphere. Once the devices have been claimed for Virtual Flash, Infinio can then go ahead and use them for cache acceleration. This is quite straightforward. From the vSphere Web Client, navigate to Host > Configure > Virtual Flash, and claim the devices as shown below (these devices should not be used by anything else):
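For anyone who prefers to check things from a script rather than the UI, here is a minimal pyVmomi sketch that lists the disks each host reports as flash devices, which is a handy sanity check before claiming them for Virtual Flash. The vCenter address and credentials are placeholders for your own environment, and the certificate check is skipped for lab use only.

# Minimal sketch: list flash-capable SCSI disks per host.
# "vcenter.lab.local" and the credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for lun in host.config.storageDevice.scsiLun:
            # Only ScsiDisk objects carry the 'ssd' flag
            if isinstance(lun, vim.host.ScsiDisk) and lun.ssd:
                size_gb = lun.capacity.block * lun.capacity.blockSize / (1024 ** 3)
                print("  flash device: %s (%.1f GB)" % (lun.canonicalName, size_gb))
    view.Destroy()
finally:
    Disconnect(si)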

Now we can return to the Infinio Management Portal and set up the cache resources. You must choose some amount of system memory to use for cache acceleration (0.5GB minimum), and once that step is complete, you can go ahead and claim a Virtual Flash device for additional SSD-based acceleration. That is what I did in this example. Initially only the memory was claimed for cache, but I could see that there were SSDs (solid state disks) also available for claiming as caching resources. The UI tells me that there is a 743.6GB device available on each host. All I need to do is click Setup SSD on each line to claim it. All of the resources are used for read cache acceleration by Infinio. There is no write cache acceleration at this time.

Once I have claimed the devices, I simply commit the transaction using the “Commit changes” button at the bottom of the window:

Policies

Now that we have resources claimed for cache acceleration, VMs are accelerated by associating a “caching” storage policy with them. The policy for Infinio will now be visible via SPBM (Storage Policy Based Management). You can take two approaches to implementing your policies for accelerating VMs: via the Infinio Management Console or via the vSphere Web Client. If we look at the list of I/O Filters, we will see the Infinio filter. Since this is vSphere 6.5, we will also see the vSphere VM Encryption and Storage I/O Control v2 I/O Filters. You can learn more about these in the Core Storage White Paper here.
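If you want to confirm this from the API side, the vSphere API exposes an IoFilterManager, and querying it against a cluster should return the filters installed on it (Infinio, plus the built-in 6.5 ones). The sketch below continues from a pyVmomi connection like the one earlier; the cluster name is a placeholder, and the IoFilterInfo field names are my assumption, so verify them against your pyVmomi version.

# Hedged sketch: list the I/O Filters installed on a cluster.
# "Cluster-A" is a placeholder; field names on the returned objects
# are assumptions based on my reading of the IoFilterInfo data object.
from pyVmomi import vim

def list_io_filters(si, cluster_name="Cluster-A"):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == cluster_name)
    view.Destroy()
    for f in content.ioFilterManager.QueryIoFilterInfo(compRes=cluster):
        print(f.id, f.name, f.vendor, f.version)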

And if we look at the VM Storage Policies, we can now see that there is a “caching” common rule available alongside Encryption and Storage I/O Control:

Now you can go ahead and build more rules alongside this caching policy, whether those are common rules for encryption and/or SIOC, or additional rule-sets for your storage. OK, we have looked at creating the policies via the vSphere Web Client, but you can also do the same thing via the Infinio Management Console. Select Configuration > Storage Policies > + Setup Policy. From here, you have the option of adding Infinio cache acceleration to an existing policy, making a copy of an existing policy and adding cache acceleration to that, or creating a brand new policy with this I/O Filter:

Note that the final option only lets you create a policy that has the I/O Filter. It doesn’t allow you to create a new storage policy and then append the Infinio cache acceleration I/O Filter. You would have to do this from the vSphere Web Client.
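And since the policies end up in SPBM either way, you can also list them programmatically. The following is a rough sketch based on the pattern used in the pyVmomi community samples: it reuses the vCenter session cookie against the /pbm/sdk endpoint and prints the requirement profiles, which should include the Infinio caching policy created above. Treat the cookie plumbing and method names as assumptions to double-check against your pyVmomi version.

# Rough sketch: connect to the SPBM endpoint and list storage policies.
# Follows the pyVmomi community-samples pattern; verify against your version.
import ssl
from http import cookies
from pyVmomi import pbm, VmomiSupport, SoapStubAdapter

def pbm_connect(vpxd_stub):
    # Reuse the existing vCenter session cookie for the PBM service
    session_cookie = vpxd_stub.cookie.split('"')[1]
    http_context = VmomiSupport.GetHttpContext()
    jar = cookies.SimpleCookie()
    jar["vmware_soap_session"] = session_cookie
    http_context["cookies"] = jar
    VmomiSupport.GetRequestContext()["vcSessionCookie"] = session_cookie
    stub = SoapStubAdapter(host=vpxd_stub.host.split(":")[0], path="/pbm/sdk",
                           version="pbm.version.version1",
                           sslContext=ssl._create_unverified_context())
    return pbm.ServiceInstance("ServiceInstance", stub)

def list_storage_policies(si):
    pbm_si = pbm_connect(si._stub)
    pm = pbm_si.RetrieveContent().profileManager
    profile_ids = pm.PbmQueryProfile(
        resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
        profileCategory="REQUIREMENT")
    for profile in pm.PbmRetrieveContent(profileIds=profile_ids):
        print(profile.name, "-", profile.profileId.uniqueId)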

Accelerate VMs

So let’s assume now that you have the appropriate policies built. The final step is to select which VMs to accelerate. That is done by clicking on the “Accelerate VMs” button in the top right-hand corner of the management console. When one or more VMs are selected, you are given an option of which policy to choose. This policy, which includes cache acceleration from Infinio and whatever else you’ve decided to include (common rules, rule-sets), is applied to the VM. The management console also provides a performance view so that you can see how many of the reads are being satisfied by cache, as well as the total number of reads and response times. In the screenshot below, I accelerated two VMs that were both running IOmeter with a 100% read workload. The VM win-2012-2 had been running for a while, but win-2012-1 had only just started (these are shown for illustrative purposes and are not meant to highlight any sort of hero numbers on performance):
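The same policy assignment can be scripted if you don’t want to use the UI or the Infinio console. Here is a minimal pyVmomi sketch that attaches a storage policy, by its profile ID (for example, one returned by the SPBM query above), to a VM’s home object and to each of its virtual disks; the VM object and profile ID are placeholders.

# Minimal sketch: attach a storage policy to a VM home and its disks.
# 'vm' is a vim.VirtualMachine object; 'profile_uuid' is a placeholder.
from pyVmomi import vim

def apply_storage_policy(vm, profile_uuid):
    profile = vim.vm.DefinedProfileSpec(profileId=profile_uuid)
    spec = vim.vm.ConfigSpec(vmProfile=[profile])
    spec.deviceChange = []
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            change = vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                device=dev, profile=[profile])
            spec.deviceChange.append(change)
    return vm.ReconfigVM_Task(spec=spec)   # returns a Task to monitor

As far as I can tell, this is essentially the same reconfigure operation that the Web Client performs when you edit a VM’s storage policies.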

Summary

In all honesty, as I mentioned earlier, this was one of the most intuitive products to install that I have come across in a long time. It is very simple to set up as well, if you have a little appreciation of storage policies, which those of you who have worked with Virtual Volumes or VSAN will already have. And it just works. However, note that this does not require VSAN or VVols. It can be used with traditional storage as well, in exactly the same way that vSphere VM Encryption and SIOC v2 can be used. Nice job Infinio!

5 Replies to “A closer look at the Infinio Cache Accelerator I/O Filter”

  1. Thanks Cormac – we are glad that you had such a good experience with our installation and integration with VAIO. Our developers worked hard to make it seamless for VMware users, both those who have adopted Storage Policies, and those who haven’t yet.

    If any of your readers would like to try Infinio for themselves and see what kind of performance we can deliver (spoiler alert: 80 microsecond response time, 1M IOPS), they can request a trial at http://www.infinio.com/free-trial.

    1. Sheryl, can you indicate if write-based acceleration (write-through or write-back) is on your roadmap?

  2. Sorry if it’s a silly question but how does this differ from the regular Flash Read Cache and how does it perform?

    1. One part would be the ability to use memory for caching – vFRC doesn’t do this.
      The other part would be SPBM – the ability to consume caching via policies. With vFRC, you had to edit each VM that wanted some cache resources.
