A closer look at Fusion-io ioControl 3.0

Last week I had the opportunity to catch up with Mike Koponen and Dean Steadman of Fusion-io. I had met with Mike and Dean at VMworld 2013, and spoke to them about the Fusion-io acquisition of NexGen Storage, which had taken place earlier last year, and what plans Fusion-io had for it. Well, the result is ioControl Hybrid Storage, and we discussed some of the architecture of ioControl as well as a number of vSphere integration points.

Architecture

To begin with, we discussed the architecture of the ioControl array. ioControl 3.0 launched in Q4 2013. It is a hybrid array, using a combination of flash devices and spinning disks (à la Nimble Storage and Tintri). As you might imagine, the flash layer in the ioControl Hybrid Storage array is provided by Fusion-io ioMemory/ioDrive II PCIe cards. The advantage of this approach is that it places the flash closer to the CPU than SSDs would, resulting in lower latencies. It also means that no disk drive slots in the drive chassis are sacrificed, giving you maximum storage capacity. The array itself has a dual controller configuration, with each controller (or service processor) having its own ioDrive. This provides full redundancy in the case of a failure: all writes are mirrored between the flash devices, and as the data ages, it is moved down to the disk drive layer. This layer is protected by a RAID-6 configuration, meaning it can tolerate multiple drive failures.
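
To make the architecture concrete, here is a minimal Python sketch of the write path as described above: writes are acknowledged on both controllers' flash devices, and blocks that age out are demoted to the RAID-6 protected disk layer. The class, the ageing threshold and the data structures are all hypothetical illustrations, not Fusion-io code.

```python
import time

class IoControlArraySketch:
    """Toy model of the write path described above: writes land on
    both controllers' ioDrives (mirrored), and blocks that age past
    a threshold are demoted to the RAID-6 disk layer. The threshold
    and structures are hypothetical."""

    def __init__(self, demote_after_secs=3600):
        self.flash_a = {}   # controller A ioDrive: block_id -> (data, timestamp)
        self.flash_b = {}   # controller B ioDrive (mirror of A)
        self.disk = {}      # RAID-6 protected spinning-disk layer
        self.demote_after = demote_after_secs

    def write(self, block_id, data):
        # A write is complete only once it sits on both flash devices,
        # so the failure of a single controller loses nothing.
        entry = (data, time.time())
        self.flash_a[block_id] = entry
        self.flash_b[block_id] = entry

    def age_out(self):
        # As data ages, move it down to the disk drive layer.
        now = time.time()
        for block_id, (data, ts) in list(self.flash_a.items()):
            if now - ts > self.demote_after:
                self.disk[block_id] = data
                del self.flash_a[block_id]
                del self.flash_b[block_id]
```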

The ioControl array can also be scaled: a second Fusion-io ioDrive can be added to each controller for additional performance, while for capacity the array can scale up to 192TB of raw storage by adding up to three external chassis of disk drives.

The array only supports the iSCSI protocol at this time. The guys said that they have a number of requests for Fibre Channel support and are actively investigating it.

Management of the ioControl array is done via the ioControl dashboard, which gives visibility into throughput, IOPS and response times, among other things. It's got a pretty nice look and feel.

ioControl Dashboard

Dynamic Data Placement and Intelligent Caching

This is the “secret sauce” of the array. Dynamic Data Placement provides auto-tiering between flash and spinning disk, moving hot blocks of data up into the flash layer and moving cold blocks (those which haven’t been accessed in some time) down to the spinning disk layer. This is complemented by the Intelligent Caching mechanism, which brings up into cache only those blocks from the disk layer that are deemed critical to performance. Non-critical data blocks are not brought into cache, meaning more of the cache is dedicated to important data.
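
As a thought experiment, here is a toy Python model of that hot/cold decision. The promotion threshold and the epoch-based demotion are my own assumptions for illustration, not Fusion-io's actual algorithm:

```python
from collections import defaultdict

class TieringSketch:
    """Toy auto-tiering model: blocks that are accessed repeatedly are
    treated as hot and promoted to flash; blocks that go quiet are
    demoted back to disk. Promoting only blocks that cross an access
    threshold mirrors the Intelligent Caching idea of not polluting
    the cache with one-off reads."""

    def __init__(self, promote_threshold=3):
        self.access_count = defaultdict(int)   # accesses in the current epoch
        self.flash = set()                     # block ids currently in flash
        self.promote_threshold = promote_threshold

    def read(self, block_id):
        self.access_count[block_id] += 1
        # A single touch (e.g. a one-off sequential scan) never reaches
        # the threshold, so non-critical blocks stay out of the cache.
        if self.access_count[block_id] >= self.promote_threshold:
            self.flash.add(block_id)

    def end_of_epoch(self):
        # Evict blocks that went cold this epoch, then reset the counters.
        for block_id in list(self.flash):
            if self.access_count[block_id] == 0:
                self.flash.discard(block_id)
        self.access_count.clear()
```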

Quality of Service

Personally, I think this is going to be a significant feature for all storage arrays going forward. QoS is the ability to guarantee the performance attributes of a LUN or volume; it means that a particular volume is capable of performing so many IOPS at a particular level of latency. The nice thing that Fusion-io have done is integrate this feature with vCenter, so that when you create a datastore through the vSphere client, you can also select an ioControl storage policy to provide QoS for the datastore (this is not to be confused with VMware’s Profile Driven Storage/Storage Policies, which are something else entirely). There are three pre-defined ioControl storage policies, Mission Critical, Business Critical and Non-Critical, which can be used to specify the performance capability of a particular LUN/datastore.
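
Conceptually, selecting one of these policies binds the datastore to a performance floor. Here is a small Python sketch of that idea; the IOPS and latency figures are entirely made up for illustration (the real targets come from the array, not from me):

```python
from dataclasses import dataclass

@dataclass
class QosPolicy:
    name: str
    min_iops: int             # guaranteed IOPS floor (hypothetical values)
    target_latency_ms: float  # latency target (hypothetical values)

# Three pre-defined tiers, mirroring the policy names above.
POLICIES = {
    "mission-critical":  QosPolicy("Mission Critical", 50000, 1.0),
    "business-critical": QosPolicy("Business Critical", 20000, 5.0),
    "non-critical":      QosPolicy("Non-Critical",       5000, 20.0),
}

def apply_policy(datastore_name: str, tier: str) -> QosPolicy:
    """What selecting an ioControl storage policy at datastore-creation
    time conceptually amounts to: binding a volume to a QoS floor."""
    policy = POLICIES[tier]
    print(f"{datastore_name}: guaranteed {policy.min_iops} IOPS "
          f"at ~{policy.target_latency_ms} ms")
    return policy

apply_policy("SQL-datastore-01", "mission-critical")
```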

The other nice thing about the QoS feature is that it avoids the noisy neighbor problem, where activity on one datastore spills over onto another, by basically isolating LUNs and guaranteeing that a problem on one LUN won’t affect the performance of another. Noisy neighbor problems within the same datastore can still be addressed by VMware’s Storage I/O Control feature.

As mentioned, QoS comes with a number of pre-configured policies to cover different workloads: Mission Critical, Business Critical and Non-Critical. In the event of a failure, such as the loss of a controller (and thus half of the array’s processing power), QoS is maintained and the non-critical datastores take the brunt of the outage. QoS policies can also be changed on the fly to meet the needs of changing workloads.
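
To illustrate how that degradation might play out, here is a hedged sketch in which guarantees are granted in priority order, so that when half the IOPS budget disappears the non-critical tier absorbs the shortfall. The numbers and the allocation scheme are hypothetical:

```python
def degrade(policies, available_iops):
    """Grant IOPS guarantees in priority order (lowest number = highest
    priority); whatever cannot be satisfied comes out of the lowest tier."""
    grants = {}
    for name, _priority, guaranteed in sorted(policies, key=lambda p: p[1]):
        grant = min(guaranteed, available_iops)
        grants[name] = grant
        available_iops -= grant
    return grants

# A controller failure halves the array's processing power: 75000 -> 37500.
print(degrade([("mission-critical", 0, 20000),
               ("business-critical", 1, 10000),
               ("non-critical", 2, 45000)], available_iops=37500))
# Mission and business critical keep their floors;
# non-critical drops from 45000 to the remaining 7500.
```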

Pluggable Storage Architecture

The ioControl array is an active/active storage array which supports ALUA. There are no PSA plugins from Fusion-io for this array; the recommendation is to use VMware’s Round Robin path policy, which is what Fusion-io recommend to their existing ioControl customers.
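
If you wanted to script that recommendation across hosts, one approach is to push the standard esxcli command over SSH. The Python/paramiko sketch below assumes the ioControl devices are claimed by the ALUA SATP (VMW_SATP_ALUA), which I have not verified; hostnames and credentials are placeholders:

```python
import paramiko

# Placeholder host details; substitute your own.
ESXI_HOST, USER, PASSWORD = "esxi01.example.com", "root", "secret"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ESXI_HOST, username=USER, password=PASSWORD)

# Make Round Robin the default path selection policy for the ALUA SATP
# (assumption: ioControl LUNs are claimed by VMW_SATP_ALUA).
cmd = ("esxcli storage nmp satp set "
       "--satp VMW_SATP_ALUA --default-psp VMW_PSP_RR")
_, stdout, stderr = client.exec_command(cmd)
print(stdout.read().decode() or stderr.read().decode())
client.close()
```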

VAAI Support

At this point in time, there are no VAAI (vStorage APIs for Array Integration) primitives supported on the array. However, the full set of block primitives should be supported by mid-2014. This includes the ATS, Zero (Write Same), XCOPY and Thin Provisioning primitives.

Replication/SRM Integration

My next question was around replication and, of course, Site Recovery Manager interoperability. ioControl 3.0 has native replication capabilities, but there is no SRM support at present. However, like the VAAI integration, the Fusion-io team are developing their own Storage Replication Adapter (SRA) and are looking to have this done by mid-2014. Of course, vSphere Replication is another option.

vSphere Web Client plugin

There is already a vSphere 5.1 C# client plugin for managing many of the workflows on the ioControl array. The guys informed me that they are working on a web client plugin, which should be coming later this year and will support both vSphere 5.1 and vSphere 5.5.

vSphere Plugin

Some of the major capabilities of the plugin are that it can:

  • manage multiple ioControl storage systems from a single plugin
  • modify the ioControl storage policies associated with a LUN/datastore
  • provision new storage for a host/cluster. This workflow incorporates best practices from both Fusion-io and VMware (all of the steps, such as creating the iSCSI initiator, adding send targets, and so on, are included in the wizard); a sketch of the host-side steps follows below
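
For a flavour of what the host-side portion of that provisioning workflow automates, here is a hedged Python/paramiko sketch driving standard esxcli commands. The adapter name, portal address and credentials are examples only, and the array-side volume creation (where the QoS policy gets applied) is outside the scope of this sketch:

```python
import paramiko

# Example values only; the array-side volume creation is done via the
# plugin or the ioControl UI, not shown here.
ESXI_HOST, USER, PASSWORD = "esxi01.example.com", "root", "secret"
ISCSI_HBA = "vmhba33"                # software iSCSI adapter name varies
ARRAY_PORTAL = "192.168.1.50:3260"   # ioControl iSCSI target portal

STEPS = [
    # 1. Enable the software iSCSI initiator on the host.
    "esxcli iscsi software set --enabled=true",
    # 2. Point the initiator at the array's send targets address.
    f"esxcli iscsi adapter discovery sendtarget add "
    f"--adapter {ISCSI_HBA} --address {ARRAY_PORTAL}",
    # 3. Rescan so the new LUN appears, ready to be formatted as VMFS.
    f"esxcli storage core adapter rescan --adapter {ISCSI_HBA}",
]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ESXI_HOST, username=USER, password=PASSWORD)
for cmd in STEPS:
    _, out, err = client.exec_command(cmd)
    print(cmd, "->", out.read().decode() or err.read().decode())
client.close()
```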

Dedupe/Compression

There is a built-in data reduction (pattern matching) mechanism on the ioControl array. If a block matches an existing pattern, then the block is already on disk, so this new block is stored as metadata rather than as additional data blocks. This is very beneficial for eager zeroed thick (EZT) disks, as the EZT process can detect that a zeroed block already exists, and thus additional zeroed blocks are stored as metadata. This reduction in capacity usage obviously means considerable savings. There is also a big performance boost when it comes to creating EZT disks, since for the most part only metadata updates are needed to quickly create a VMDK.
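
Here is a toy Python model of the idea. I am using a hash as the pattern fingerprint purely for brevity; the actual ioControl pattern-matching mechanism is not documented here and may work quite differently:

```python
import hashlib

ZERO_BLOCK = bytes(4096)  # a 4 KB block of zeroes (block size is illustrative)

class PatternStoreSketch:
    """Toy model of pattern-matching data reduction: if an incoming block
    matches a pattern already stored, keep only a metadata pointer instead
    of another copy of the data. Eager zeroed thick disks are the best
    case -- every block matches the zero pattern, so creating the VMDK is
    almost entirely metadata updates."""

    def __init__(self):
        self.blocks = {}    # fingerprint -> data, stored once per pattern
        self.metadata = {}  # logical address -> fingerprint

    def write(self, address, data):
        fingerprint = hashlib.sha256(data).hexdigest()
        if fingerprint not in self.blocks:
            self.blocks[fingerprint] = data   # new pattern: store the data
        self.metadata[address] = fingerprint  # known pattern: metadata only

store = PatternStoreSketch()
for addr in range(1024):  # "zero out" a 4 MB region, as EZT creation would
    store.write(addr, ZERO_BLOCK)
print(len(store.blocks), "data block(s) stored for 1024 writes")  # -> 1
```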

Snapshots

The array supports snapshots: 256 snaps per volume. Of course, this can lead to a lot of objects on the array. One of the nice features of the UI is that it allows searching of volumes or snapshots, an easy way to manage a very large number of objects.

Virtual Volumes

Which leads us nicely on to one of the future VMware projects: Virtual Volumes, or VVols. We had a brief chat about this upcoming feature, and Fusion-io are involved in the project, as it complements their QoS mechanism very nicely. VVols, for those who don’t know, is a way of making virtual machine disks (VMDKs) first-class citizens in the storage world. This will allow us to scale out our current storage offerings as well as give much more granular control over VMDKs from the point of view of snapshots and replication. I did a tech preview of VVols here. Remember: VVols is not yet a shipping product/feature. However, many storage vendors are working with us on this project. If Fusion-io can tie their ioControl QoS feature directly to a VMDK via VVols, meaning they can guarantee QoS for a particular VMDK through VVols, then this will indeed be a very nice feature to have. Effectively, this would tie VMware’s VM Storage Policies feature to ioControl’s QoS policy feature. How cool would that be? We’ll have to wait and see.

If you wish to learn more about the array, you can read the ioControl data sheet or the architecture white paper. There is also a demonstration video on using the ioControl User Interface here:


Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @CormacJHogan