My first introduction to X-IO was via Stephen Foskett’s Tech Field Days. They piqued my interest and I added them to the list of storage vendors that I wanted to check out at VMworld 2014. I started to research these guys a little more, and learnt that they are closely related to Xiotech, a SAN company that I dealt with on occasion when I worked in technical support for VMware back in the day. It seems that Xiotech acquired Seagate’s spun-out Advanced Storage Group in 2007. That group then began work on a different product line from the existing Xiotech one, namely the Intelligent Storage Element (ISE) array. The Xiotech products were discontinued in 2012 (although the name continues to appear on the VMware SAN/Storage HCL), and the focus was placed on the ISE products. I was a bit confused when I saw that X-IO were not listed on the HCL directly, but after checking with Blair Parkhill, VP of Tech Marketing at X-IO, it turns out that they still use their incorporated name, Xiotech.
As many of you are aware, I was at VMworld in San Francisco last week. I wrote a number of articles about some VMware storage announcements, such as EVO:RAIL, VAIO and VVols. However there were, as usual, quite a number of storage vendors at this year’s conference. One of the vendors that I really wanted to learn more about was Kaminario, an all-flash array vendor that I’d heard a lot about. I had the pleasure of spending some time at the Kaminario booth with Shai Maskit, a Senior Product Manager with Kaminario. I posed my usual set of questions to learn a bit more about their AFA products.
I’ve been doing a bit of work over the past few weeks on the adapters for vCenter Operations (vC Ops) with my old pal Paudie. We are working on vCenter Operations 5.8 and using a vSphere 5.5U1 environment. Since we have a Brocade Fibre Channel switch and an EMC VNX array in our lab, I wanted to get the Management Pack for Storage Devices (MPSD) and the Brocade SAN Analytics Management Pack deployed, and see what information we could glean from those extension packs. When we completed the configuration, we were able to go into the vC Ops custom views and see details like the following Brocade – Health Overview and Storage Components Heatmap:
Caution: We spent a lot of time trying to figure out why the MPSD adapter would not connect to the CIMOM service on Brocade’s Network Advisor. This boiled down to networking/DNS configuration issues. The MPSD release notes for vC OPs describe the issue. As they say, I should have RTFM. Anyhow, here are the steps we went through to get this setup going. I’m afraid it is rather long, but hopefully you will find the information in here useful.
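For what it’s worth, a couple of quick checks along these lines would have surfaced our problem much sooner. This is only a sketch: the hostname and IP address are placeholders, and while 5988/5989 are the usual CIM-XML ports, your Network Advisor installation may be configured to use different ones.

# Confirm forward and reverse DNS lookups for the Network Advisor host resolve consistently
nslookup bna.lab.local
nslookup 10.0.0.50

# Confirm the CIMOM/SMI-S agent is reachable on its ports from the vC Ops side
nc -z bna.lab.local 5988
nc -z bna.lab.local 5989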
Pure Storage are all over the news at the moment. They just secured another round of funding ($225 million to be precise), and are now valued at over $3 billion. You can read more about that here. However, even before this announcement, I had already arranged to have a catch-up chat with Pure’s primary evangelist (and a good pal of mine), Vaughn Stewart. I was surprised to see that it had been 18 months since I last did a piece on Pure, so I really did want to see what changes they had made in the meantime, as there were a few vSphere interoperability pieces still to be completed when we last spoke.
There are many occasions where the vSphere client does not display all of the relevant information about a particular storage device, or enough detail to troubleshoot problems related to that device. The purpose of this post is to explain some of the ESXCLI commands that I use most often when trying to determine storage device information, and to troubleshoot a particular device.
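To give a flavour of what follows, here are a few of the commands I find myself reaching for most often; the naa.xxx identifier is simply a placeholder for one of your own device identifiers.

# List all storage devices visible to the host, with size, vendor and display name
esxcli storage core device list

# Show the detail and the multipathing (NMP) configuration for a single device
esxcli storage core device list -d naa.xxx
esxcli storage nmp device list -d naa.xxx

# List the paths to the device and retrieve its I/O statistics
esxcli storage core path list -d naa.xxx
esxcli storage core device stats get -d naa.xxx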
Last week I had the opportunity to catch up with Mike Koponen and Dean Steadman of Fusion-io. I had met with Mike and Dean at VMworld 2013, and spoke to them about Fusion-io’s acquisition of NexGen Storage earlier that year, and what plans Fusion-io had for this acquisition. Well, the result is ioControl Hybrid Storage, and we discussed some of the architecture of ioControl as well as a number of vSphere integration points.
A short and sweet post today. In vSphere 5.0, VMware introduced support for 16Gb FC HBAs. However, these HBAs had to be throttled down to run at 8Gb. In 5.1, VMware supported these 16Gb HBAs running at 16Gb. However, an important point to note is that there was no support for full end-to-end 16Gb connectivity from host to array in vSphere 5.1. To get full bandwidth, you may have had to configure a number of 8Gb connections from the switch to the storage array.
With the release of vSphere 5.5, VMware now supports 16Gb E2E (end-to-end) Fibre Channel.
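If you want to check what speed your FC adapters have actually negotiated on a 5.5 host, something along these lines should show it (the exact fields returned can vary with the HBA driver in use).

# List the FC adapters on the host, including Port State and negotiated Speed
esxcli storage san fc list

# List all storage adapters and their drivers for good measure
esxcli storage core adapter list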
Get notification of these blog posts and more VMware Storage information by following me on Twitter: @CormacJHogan