Before I left for PTO, I wrote an article on a number of different storage vendors you should be checking out at this year’s VMworld 2013. One of these was a new start-up called PernixData. With tongue firmly in cheek, I suggested that PernixData might use VMworld as a launchpad for their FVP (Flash Virtualization Platform) product. Well, needless to say, my good friend Satyam Vaghani, CTO at PernixData, reached out to me to say that they were in fact announcing FVP before VMworld. He shared some details with me, which I can now share with you if you haven’t heard about the announcement.
I was fortunate enough yesterday to get an introduction to QLogic’s new Mt. Rainier technology. Although Mt. Rainier allows for different configurations of SSD/Flash to be used, the one that caught my eye was the QLogic QLE10000 Series SSD HBAs. These have not yet started to ship, but considering that the announcement was last September, one suspects that GA is not far off. As the name suggests, this is a PCIe Flash card, but QLogic have one added advantage – the flash is combined with the Host Bus Adapter, meaning that you get your storage connectivity and cache accelerator on a single PCIe card. This is a considerable advantage over many of the other PCIe cache accelerators on the market at the moment, since these still require an HBA for SAN connectivity as well as a slot for the accelerator.
As many of you are aware, VMware made a number of announcements at VMworld 2012. There were three technical previews in the storage space. The first of these was on Virtual Volumes (vVols), which is aimed at making storage objects in virtual infrastructures more granular. The second was Distributed Storage, a new distributed datastore using local ESXi storage. The final one was Virtual Flash (vFlash). However, rather than diving into vFlash, I thought it might be more useful to take a step back and have a look at flash technologies in general.
Last week, I presented at the UK National VMUG. I took the opportunity to catch up with Darren Williams (Technical Director, EMEA & APAC) of WHIPTAIL, who was also presenting at the event. My first introduction to WHIPTAIL came last year when I first met Darren at another user group meeting, and I posted about their XLR8R array on the vSphere storage blog. Darren & I discussed the changes which WHIPTAIL has undergone in the past 12 months since we last spoke, including the launch of a new range of scale-out storage arrays, as well as the new features in WHIPTAIL’s soon to be released 4.1.1 update.
At VMworld 2012 in San Francisco, I had the pleasure of catching up with Scott Kline, Karthik Pinnamaneni & the rest of the team from Nimbus Data. In the weeks leading up to VMworld I read quite a bit about Nimbus Data’s new Gemini Flash Array, but my primary interest was to figure out what integration points existed with vSphere.
Let’s start with a look at the Gemini Flash Array. The first thing that jumps out is that there are multiple protocols supported for both SAN & NAS. The array supports Fibre Channel, iSCSI, NFS, SMB and InfiniBand protocols. There is no FCoE support at this time, and when I asked the guys why, they said that this is simply due to lack of demand. There is nothing that would prevent them implementing FCoE if there was sufficient demand for it, which they are not seeing right now.
An interesting fact is that Nimbus Data manufacture their own proprietary solid state drives. They purchase the NAND and build the drives themselves. There is a reason for this. One point that Scott and Karthik made to me was that many scale-out storage offerings do not scale out their cache with their arrays, and the cache then becomes the bottleneck. Nimbus Data address this by placing cache on each of their drives, so that as the storage scales out, so does the cache. They refer to this as their Distributed Cache Architecture (DCA).
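To see why this matters, here is a minimal sketch (my own illustration, not anything from Nimbus Data, and the cache and drive sizes are made-up example numbers) comparing a fixed controller cache against a per-drive cache as an array scales out:

```python
# Illustrative only: compare cache-to-capacity ratio as drives are added.
# All sizes below are hypothetical example figures, not Nimbus Data specs.

def cache_ratio_fixed(num_drives, drive_gb=1024.0, controller_cache_gb=64.0):
    """Fraction of capacity covered by a single fixed controller cache."""
    capacity_gb = num_drives * drive_gb
    return controller_cache_gb / capacity_gb

def cache_ratio_distributed(num_drives, drive_gb=1024.0, per_drive_cache_gb=4.0):
    """Fraction of capacity covered when every drive carries its own cache."""
    capacity_gb = num_drives * drive_gb
    return (num_drives * per_drive_cache_gb) / capacity_gb

# With a fixed cache, the ratio shrinks as drives are added;
# with per-drive cache (the DCA idea), the ratio stays constant.
for n in (4, 16, 64):
    print(n, cache_ratio_fixed(n), cache_ratio_distributed(n))
```

The point of the sketch is simply that the distributed ratio is independent of drive count, while the fixed-cache ratio falls off as the array grows.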
The ‘secret sauce’ at the heart of the Nimbus array is the HALO operating system. It provides administration, data protection, optimization, security, and monitoring of Nimbus Data arrays. The Nimbus Data array presents a single SSD device back to the ESXi host(s), either via a block protocol or NFS. Nimbus Data claim that their newer Gemini model can achieve 1.2 million IOPS in a 2U box, at a latency of only 100 microseconds. Yes, that is 0.1 millisecond latency. The I/O block size used to achieve this figure was 4K, with 80% read & 20% write. They were also able to sustain 12GB/s of throughput with a 256K block size.
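A quick back-of-the-envelope check shows those two figures are consistent with each other (assuming 1K = 1024 bytes; small blocks are IOPS-bound, large blocks are bandwidth-bound):

```python
# Sanity-check the quoted Gemini figures.

iops = 1_200_000
block_4k = 4 * 1024                      # 4K block size
throughput_4k = iops * block_4k          # bytes/sec at 1.2M IOPS
print(throughput_4k / 1e9)               # ~4.9 GB/s at the small block size

block_256k = 256 * 1024                  # 256K block size
throughput_large = 12e9                  # ~12 GB/s sustained
print(throughput_large / block_256k)     # ~45,800 IOPS at the large block size
```

In other words, the 1.2 million IOPS number is a small-block figure, while the 12GB/s number trades IOPS for bandwidth at the larger block size.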
One of the concerns many people have with flash is the lifespan. Nimbus Data are offering 10-year endurance with their drives. There are a number of things they do to mitigate the wear-out of their drives. One thing they do is cache the writes in DRAM. Once there is a full 64KB of writes in the cache, they do a full page write to Flash. Nimbus Data also have an algorithm which chooses between the individual flash cells. Each of the cells is rated, and the algorithm will choose the cells which have a higher rating over cells with a lower rating. All of these contribute to the MLC (Multi-Level Cell) flash drives lasting the guaranteed 10 years. In fact, Scott told me that 2 years ago they deployed Nimbus Data Flash Arrays at eBay and the flash drives in these arrays have not yet reached 10% usage.
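The two techniques described above can be sketched in a few lines of code. To be clear, this is my own toy illustration of the concepts, not Nimbus Data’s actual HALO implementation – the class names and the cell “rating” structure are invented for the example:

```python
# Toy illustration of two flash wear-mitigation ideas (not HALO code):
# 1) coalesce writes in DRAM until a full 64KB page accumulates, then
#    flush the whole page to flash in one program operation;
# 2) prefer higher-rated (less worn) cells when placing new writes.

PAGE_SIZE = 64 * 1024  # flush granularity: one full flash page

class WriteCoalescer:
    def __init__(self):
        self.buffer = bytearray()   # stand-in for the DRAM write cache
        self.pages_flushed = 0

    def write(self, data: bytes):
        self.buffer.extend(data)
        # Flush only complete 64KB pages; partial data stays in DRAM,
        # so the flash never sees a partial-page program.
        while len(self.buffer) >= PAGE_SIZE:
            self._flush_page(bytes(self.buffer[:PAGE_SIZE]))
            del self.buffer[:PAGE_SIZE]

    def _flush_page(self, page: bytes):
        self.pages_flushed += 1     # stand-in for the flash page program

def pick_cell(cells):
    """Choose the highest-rated (least worn) cell for the next write."""
    return max(cells, key=lambda c: c["rating"])
```

For example, three 16KB writes sit in DRAM without touching flash; a fourth completes the 64KB page and triggers a single full-page program – four small writes cost one flash write instead of four.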
Nimbus Data currently support all three VAAI Block Primitives – ATS (Atomic Test & Set), Write Same (Zero) and XCOPY (Clone). They are working on VAAI-NAS primitives but these are not available yet. The driving factor here of course is the VCAI offload – the ability to offload linked clones to the storage array for View Desktop deployments.
Scott also told me that they are working on a management plugin for the new vSphere 5.1 web-based client, but it wasn’t available for VMworld 2012. Right now the management is done by an external web-based management tool. However, I am led to believe that Nimbus Data will have a vCenter plugin for their management tool sometime in Q4 2012.
Business Continuance/Disaster Recovery
The Gemini array is designed to be Fault Tolerant and replication can be configured in either synchronous or asynchronous mode. Snapshots and replication currently work at the volume level. There is no integration with VMware Site Recovery Manager at this time. This is something Nimbus Data are hoping to have in place in the first half of 2013.
Overall, this is an amazing piece of technology. I would like to see even more integration with vSphere products and features going forward, as I personally think that this is a major differentiation factor in the storage market. Still, over 1 million IOPS in a 2U box – impressive stuff.
Get notification of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage