QLogic Mt. Rainier Technology Preview
I was fortunate enough yesterday to get an introduction to QLogic’s new Mt. Rainier technology. Although Mt. Rainier allows for different configurations of SSD/flash to be used, the one that caught my eye was the QLogic QLE10000 Series SSD HBAs. These have not started to ship as yet, but considering that the announcement was last September, one suspects that GA is not far off. As the name suggests, this is a PCIe flash card, but QLogic have one added advantage – the flash is combined with the Host Bus Adapter, meaning that you get your storage connectivity and cache acceleration on a single PCIe card. This is a considerable advantage over many of the other PCIe cache accelerators on the market at the moment, since they still require an HBA for SAN connectivity as well as a slot for the accelerator.
The QLE10000 contains a dual-port 8Gb HBA with on-board SLC (single-level cell) flash (either 200GB or 400GB), which QLogic claim can deliver 310,000 IOPS. One of the major advantages, apart from the combined connectivity and acceleration, is that configuration is done in much the same way as HBA configuration is today – if you understand how to configure and manage QLogic’s current HBAs, then no additional smarts are required to deploy the new cache accelerator. No additional drivers are required either, which greatly simplifies deployment: there is only the one driver for both the HBA and cache accelerator components.
The caching is done on a per-LUN basis. There is no virtual machine (VM) granularity at this time, but VMs residing on a cached LUN automatically benefit from having cache in their I/O path.
The cache is currently a write-through (read) cache. Although write-back caching is one of QLogic’s goals, it will not be available in this release. Of course, write-back is always harder, as you have to ensure that the cache is mirrored so that there is no data loss in the event of a host failure.
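To illustrate why write-through is the safer starting point, here is a minimal sketch of a write-through read cache. This is not QLogic code – the class and names are invented for illustration; the point is simply that writes always land on the backing array before completing, so the flash cache never holds the only copy of dirty data.

```python
class WriteThroughCache:
    """Illustrative write-through (read) cache, not a QLogic API."""

    def __init__(self, backing_store):
        self.backing = backing_store   # dict standing in for the SAN LUN
        self.cache = {}                # blocks resident in flash

    def read(self, block):
        if block in self.cache:        # cache hit: served from flash
            return self.cache[block]
        data = self.backing[block]     # cache miss: fetch from the array
        self.cache[block] = data       # populate cache for future reads
        return data

    def write(self, block, data):
        # Write-through: the write goes straight to the array, so a host
        # or adapter failure cannot strand dirty data in the flash cache.
        self.backing[block] = data
        self.cache[block] = data       # keep the cached copy coherent
```

A write-back design would instead acknowledge the write once it hit flash and destage to the array later, which is why it needs a mirrored cache before it can be offered safely.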
One of the other nice features is cache coherency. For every LUN, there is a single owner of that LUN’s cache. A cluster of accelerator cards is used, but only one accelerator owns the cache for a given LUN. Each card knows who every LUN cache owner is, so requests are routed to that cache owner. If a cache miss occurs, the LUN cache owner requests the data from the LUN, places it in cache and returns it to the requesting host. Clustering Mt. Rainier PCIe cards involves putting all the initiators into a single zone (a multi-initiator zone) on the switch. If I/O from one adapter targets a LUN whose cache is owned by another adapter, the I/O request can be directed to that adapter.
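The ownership scheme above can be sketched as follows. This is a hypothetical model, not QLogic’s implementation – the `Adapter` class, the shared ownership map and the LUN names are all invented – but it shows the essential flow: every adapter consults the same LUN-to-owner map, non-owners forward reads to the owner, and only the owner populates its cache on a miss.

```python
class Adapter:
    """Illustrative accelerator card in a clustered, per-LUN-ownership model."""

    def __init__(self, name, ownership):
        self.name = name
        self.ownership = ownership   # shared map: LUN id -> owning adapter
        self.cache = {}              # this card's flash cache, keyed by (lun, block)

    def read(self, lun, block, backing):
        owner = self.ownership[lun]          # every card knows each LUN's owner
        if owner is not self:
            # Route the request across the multi-initiator zone to the owner.
            return owner.read(lun, block, backing)
        key = (lun, block)
        if key not in self.cache:            # miss: owner fetches from the LUN
            self.cache[key] = backing[key]
        return self.cache[key]               # hit (or freshly cached) data
```

Failover in this model is just a change to the shared map: reassigning `ownership[lun]` to a surviving adapter moves cache ownership for that LUN, which mirrors the adapter-failure behaviour described below.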
This allows QLogic to support features such as vMotion and DRS and keep its cache pool hot, because once a VM has been migrated to a new host, it can still access the cache on the adapter back on the original host. In the event of an adapter failure, another adapter can take over cache ownership for any LUNs that were being cached by the failed adapter.
There is a YouTube video demonstration here if you want some further details. There is also more information on the QLogic web site here. My understanding is that QLogic will be doing some live demonstrations of their Mt. Rainier technology at VMware Partner Exchange (PEX) 2013 next month in Las Vegas. Definitely worth checking out.
You can read more about flash and caching technologies in this post.
Get notification of these blog postings and more VMware Storage information by following me on Twitter: @CormacJHogan