I’ve been having lots of fun lately in my new role in Integration Engineering. It is also good to have someone local once again to bounce ideas off. Right now, that person is Paudie O’Riordan (although sometimes I bet he wishes I was in a different time zone). One of the things we are currently looking at is a VSAN implementation using Fusion-io ioDrive2 cards (which our friends over at Fusion-io kindly lent us). The purpose of this post is to show the steps involved in configuring these cards on ESXi and adding them as nodes to a VSAN cluster. However, even though I am posting about it, Paudie did most of the work, so please consider following him on Twitter as he has a lot of good vSphere/storage knowledge to share.
I was going to make this part 11 of my vSphere 5.5 Storage Enhancements series, but since this is such a major enhancement to storage in vSphere 5.5, I thought I’d put a little more focus on it. vFRC, short for vSphere Flash Read Cache, is a mechanism whereby the read operations of your virtual machine are accelerated by using an SSD or a PCIe flash device to cache the disk blocks of the application running in the Guest OS of your virtual machine. Now, rather than going to magnetic disk to read a block of data, the data can be retrieved from a flash cache layer, improving performance and lowering latency. This is commonly known as a write-through cache, where a write is only acknowledged once it has been committed to persistent storage, as opposed to a write-back cache, where the write operation is acknowledged as soon as the block of data enters the cache layer.
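To make the distinction between the two acknowledgment models concrete, here is a minimal sketch in Python. This is purely illustrative pseudologic, not vFRC’s actual implementation or API; the class and method names are my own. The key difference is where the write is acknowledged: after the backing disk is updated (write-through) or as soon as the cache holds the block (write-back).

```python
class WriteThroughCache:
    """Writes hit the backing store before being acknowledged (vFRC model)."""

    def __init__(self, backing_store):
        self.cache = {}                      # stands in for the flash layer
        self.backing_store = backing_store   # stands in for magnetic disk

    def read(self, block_id):
        # Serve from the flash cache on a hit; otherwise go to disk
        # and populate the cache so the next read is accelerated.
        if block_id in self.cache:
            return self.cache[block_id]
        data = self.backing_store[block_id]
        self.cache[block_id] = data
        return data

    def write(self, block_id, data):
        self.backing_store[block_id] = data  # persist first (the slow path)
        self.cache[block_id] = data          # then update the cache
        # The write is acknowledged here, only after the disk write completes.


class WriteBackCache:
    """Writes are acknowledged as soon as they land in the cache layer."""

    def __init__(self, backing_store):
        self.cache = {}
        self.dirty = set()                   # blocks not yet flushed to disk
        self.backing_store = backing_store

    def write(self, block_id, data):
        self.cache[block_id] = data          # fast path: cache only
        self.dirty.add(block_id)             # remember to flush later
        # The write is acknowledged here, before the disk sees the data.

    def flush(self):
        # Destage dirty blocks to the backing store.
        for block_id in self.dirty:
            self.backing_store[block_id] = self.cache[block_id]
        self.dirty.clear()
```

The trade-off should be clear from the sketch: write-back gives faster write acknowledgment but risks losing dirty blocks if the cache device fails before a flush, which is why a read cache like vFRC sticks to write-through semantics.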
This is an interesting announcement for those of you following emerging storage technologies. We’ve been talking about flash technologies for some time now, but for the most part flash has come as either an SSD or a PCIe device. Well, we now have another format – a DIMM-based flash storage device. And VMware now supports it.
Last week I had the opportunity to catch up with Mike Koponen and Dean Steadman of Fusion-io. I had met with Mike and Dean at VMworld 2013, where we spoke about the Fusion-io acquisition of NexGen Storage earlier that year, and what plans Fusion-io had for the acquisition. Well, the result is ioControl Hybrid Storage, and we discussed some of the architecture of ioControl as well as a number of vSphere integration points.
All Flash Arrays continue to make the news. Whether it is EMC’s XtremIO launch or Violin Memory’s current market woes, there is no doubt that AFAs continue to generate a lot of interest. Those of you interested in flash storage will not need an introduction to SolidFire. These guys were founded by Dave Wright (ex-Rackspace) and have been around since 2009. I had been trying to catch up with SolidFire for some time, as I’d heard their pitch around Quality of Service on a per-volume basis and wanted to learn more, especially how it integrated with vSphere features. Recently I caught up with Dave Cahill and Adam Carter of SolidFire to have a chat about SolidFire in general and what the VMware integration points are.
Before I left for PTO, I wrote an article on a number of different storage vendors you should be checking out at this year’s VMworld 2013. One of these was a new start-up called PernixData. With tongue firmly in cheek, I suggested that PernixData might use VMworld as a launchpad for their FVP (Flash Virtualization Platform) product. Well, needless to say, my good friend Satyam Vaghani, CTO at PernixData, reached out to me to say that they were in fact announcing FVP before VMworld. He shared some details with me, which I can now share with you if you haven’t heard about the announcement.
I was fortunate enough yesterday to get an introduction to QLogic’s new Mt. Rainier technology. Although Mt. Rainier allows for different configurations of SSD/flash to be used, the one that caught my eye was the QLogic QLE10000 Series SSD HBAs. These have not started shipping yet, but considering that the announcement was last September, one suspects that GA is not far off. As the name suggests, this is a PCIe flash card, but QLogic have one added advantage – the flash is combined with the Host Bus Adapter, meaning that you get your storage connectivity and cache acceleration on a single PCIe card. This is a considerable advantage over many of the other PCIe cache accelerators on the market at the moment, since those still require an HBA for SAN connectivity as well as a slot for the accelerator.