A closer look at X-IO

My first introduction to X-IO was via Stephen Foskett’s Tech Field Days. They piqued my interest and I added them to the list of storage vendors that I wanted to check out at VMworld 2014. I started to research these guys a little more, and learnt that they are closely related to Xiotech, a SAN company that I dealt with on occasion when I worked in technical support for VMware back in the day. It seems that Xiotech acquired Seagate’s spun-out Advanced Storage Group in 2007. That group went on to develop a different product from the classic Xiotech line, namely the Intelligent Storage Element or ISE array. The Xiotech products were discontinued in 2012 (although the name continues to appear on the VMware SAN/Storage HCL), and the focus was placed on the ISE products. I was a bit confused when I saw that X-IO were not listed on the HCL directly, but after checking with Blair Parkhill, VP of Tech Marketing at X-IO, it seems that they still use their incorporated name, Xiotech.

My primary interest was in learning about some of the error/failure handling techniques alluded to during Tech Field Day. When I visited their booth, I was introduced to Richard Lary. Richard previously held very senior roles at DEC and subsequently Compaq, where he was involved in the first release of the EVA storage array product line, and he has been a technical adviser to a number of storage start-ups. He is now a corporate fellow with X-IO. Richard described to me some of the differentiators that an X-IO array has over the competition.

X-IO Features

The X-IO ISE (Intelligent Storage Element) Storage System is a dual controller array which can be configured with all HDD (ISE 200) or as a hybrid of HDD and SSD (ISE 700). The controllers work in an active/active configuration, which means that they can be hot-upgraded without taking the array offline.

The hybrid array uses a mix of HDD & SSD to provide an auto-tiering solution; the SSD is not used as a cache. This means that hot blocks are moved up into the SSD layer, whilst older, cold data is moved down to the HDD layer.
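
To make the distinction between tiering and caching a little more concrete, here is a minimal sketch of the general idea (purely illustrative, and in no way X-IO’s actual algorithm): blocks that accumulate enough I/O in an interval are promoted to the SSD tier, and blocks that go quiet are demoted back to HDD, so the SSD holds the data itself rather than a copy of it.

from collections import defaultdict

PROMOTE_THRESHOLD = 50   # I/Os per interval needed to earn a slot on SSD
DEMOTE_THRESHOLD = 5     # below this, the block drops back to HDD

class TieringEngine:
    def __init__(self):
        self.access_counts = defaultdict(int)  # block id -> I/Os this interval
        self.ssd_resident = set()              # blocks currently living on SSD

    def record_io(self, block_id):
        self.access_counts[block_id] += 1

    def rebalance(self):
        """Run periodically: promote hot blocks, demote cold ones."""
        for block_id, hits in self.access_counts.items():
            if hits >= PROMOTE_THRESHOLD:
                self.ssd_resident.add(block_id)      # hot: move up to SSD
            elif hits <= DEMOTE_THRESHOLD:
                self.ssd_resident.discard(block_id)  # cold: move back to HDD
        self.access_counts.clear()                   # start a fresh interval

A real implementation would obviously track working-set history, rate-limit migrations and account for SSD capacity, but the key point is that data moves between tiers rather than being duplicated into a cache.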

Richard stated that one of the primary goals of X-IO is reliability. In his previous experience in the storage industry, when disk drives were replaced at a customer site and sent back in-house for repair, in the majority of cases no fault was found. One of the first things X-IO do is ship their drives in a stabilizing packaging framework, called datapacs. This mitigates vibration, a common cause of disk drive issues.

Perhaps even more importantly, X-IO have the ability to try very clever stuff to address other disk drive problems. For example, they can spin a drive down and up to see if that resolves an intermittent issue, they can power a drive off for a period of time and power it back on again to see if an issue goes away, and they can even reformat a drive completely should a problem occur. Even better, X-IO have the smarts to offline a single platter or head on a disk drive if the drive is only complaining about that platter/head – the rest of the drive can remain online.
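
To illustrate the escalation idea only (the Drive class and its fault flags below are invented for this sketch and are not X-IO’s firmware logic), the ladder might look something like this, working through the least disruptive fixes before giving up on the drive:

from dataclasses import dataclass

@dataclass
class Drive:
    intermittent_fault: bool = False   # clears after a spin or power cycle
    single_head_fault: bool = False    # errors confined to one head/platter
    media_fault: bool = False          # needs a full reformat

def remediate(d: Drive) -> str:
    # 1. Spin the drive down and back up (or power cycle it); many
    #    intermittent faults simply clear here.
    if d.intermittent_fault:
        d.intermittent_fault = False
        return "recovered after spin cycle / power cycle"
    # 2. If only one head or platter is complaining, retire just that
    #    surface and keep the rest of the drive in service.
    if d.single_head_fault:
        d.single_head_fault = False
        return "bad head retired; rest of drive remains online"
    # 3. As a last resort before pulling the drive, reformat it completely.
    if d.media_fault:
        d.media_fault = False
        return "drive reformatted"
    return "no fault found"

print(remediate(Drive(single_head_fault=True)))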

Protocols

Each ISE can support up to four 8 Gbps Fibre Channel connections per controller and two 10/40 Gbps Ethernet connections per controller for iSCSI; the former is their 2nd generation controller design and the latter their 3rd generation controller design.

vSphere Interop

So just to be clear, X-IO are using their incorporated name, Xiotech, on the VMware HCL. There are three generations of array listed, including the FC and iSCSI controller designs. Both the hybrid series and the all-HDD series are supported. Speaking to Richard about some of the interoperability, he told me that they have implemented the ATS, Clone and Zero VAAI primitives, but that they do not have thin provisioning/UNMAP yet. The main reason for this is that the ISE does not yet provide thin provisioning at the array level, but they are working on it.
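
If you want to confirm from the vSphere side that the VMFS datastores backed by an ISE are picking up those offloads, a quick pyVmomi sketch like the one below can list each VMFS volume’s hardware acceleration (VAAI) status per host. The hostname and credentials are placeholders, and it needs the pyvmomi package installed; on the ESXi host itself, esxcli storage core device vaai status get gives a per-primitive (ATS/Clone/Zero/Delete) breakdown per device.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only: skip certificate verification
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for mount in host.config.fileSystemVolume.mountInfo:
            vol = mount.volume
            if isinstance(vol, vim.host.VmfsVolume):
                # vStorageSupport reports vStorageSupported / vStorageUnsupported / vStorageUnknown
                print("  %-30s %s" % (vol.name, mount.vStorageSupport))
finally:
    Disconnect(si)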

Blair also mentioned that they are wrapping up VASA (vSphere APIs for Storage Awareness) support. This will surface the storage array’s capabilities up to vSphere and allow vSphere customers running on X-IO to utilize VM Storage Profiles for the correct selection and ongoing monitoring of VM storage placement.

There is also a web client plugin for vSphere that allows a vSphere administrator to manage the ISE storage. It gives details about the performance of the ISE, including IOPS, throughput and latency. Conversely, there is also a VMware ecosystem integration plugin for the ISE Manager which allows a storage administrator to gain insight into the vSphere environment and to perform certain storage-related vSphere tasks – nice touch.

A Management Pack for vCenter Operations Manager integration exists as well. This will provide X-IO specific dashboards for ISE health overview and performance metrics.

They are also part of the RDP (Rapid Desktop Program) for VDI with their X-Pod reference architecture. The whole point of this program, as its name implies, is to enable the rapid deployment of VDI desktops on a known/proven configuration with pre-installed & pre-configured software, especially for the initial proof-of-concept (POC) stage. You can find out more about the RDP program here.

A full list of VMware integration features can be found on the X-IO site by clicking here.

Data Services

There is very little in the way of data services on the ISE array right now. As already mentioned, there is no thin provisioning. Nor are there any snapshots, replication, deduplication, compression or encryption to speak of. X-IO stated that their focus right now is reliability and performance, and they feel that this is what most customers want from their storage. They did however state that two ISE arrays could present LUNs with the same UUID, which would then be collapsed down to a single device with I/Os going to both arrays, and that perhaps this could be used as a replicating DR config. I’m not sure I’d be comfortable recommending that in vSphere environments; I’d be more inclined to recommend a software solution like vSphere Replication.

Summary

My one concern is the current lack of data services (snapshots, replication, deduplication, etc.). However, X-IO have certainly done a lot of work to integrate with vSphere, and their focus on reliability is to be commended. My understanding is that a number of data services are already in the works, such as thin provisioning. In the meantime, data services such as snapshots, cloning and replication can be leveraged at the vSphere layer. If you are looking for a storage system with a bunch of vSphere integration features, one that is focused on reliability and performance, then X-IO is certainly worth investigating. X-IO will be exhibiting at VMworld 2014 in Barcelona next month. They are in booth 158 and I am sure they would be happy to discuss their ISE array and features in more detail.
