
Nimbus Data’s new Gemini Array & vSphere Integration

At VMworld 2012 in San Francisco, I had the pleasure of catching up with Scott Kline, Karthik Pinnamaneni & the rest of the team from Nimbus Data. In the weeks leading up to VMworld I read quite a bit about Nimbus Data’s new Gemini Flash Array, but my primary interest was to figure out what integration points existed with vSphere.

Gemini Array

Let’s start with a look at the Gemini Flash Array. The first thing that jumps out is that multiple protocols are supported for both SAN & NAS: the array supports Fibre Channel, iSCSI, NFS, SMB and InfiniBand. There is no FCoE support at this time, and when I asked the guys why, they said it is simply down to lack of demand. There is nothing that would prevent them implementing FCoE if there were sufficient demand, which they are not seeing right now.

An interesting fact is that Nimbus Data manufacture their own proprietary solid state drives: they purchase the NAND and build the drives themselves. There is a reason for this. One point that Scott and Karthik made to me was that many scale-out storage offerings do not scale out their cache as the array grows, and the cache then becomes the bottleneck. Nimbus Data address this by placing cache on each of their drives, so that as the storage scales out, so does the cache. They refer to this as their Distributed Cache Architecture (DCA).
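
The effect is easy to see with some hypothetical numbers. The cache sizes below are purely my own, for illustration – they are not Nimbus Data specifications:

```python
# Hypothetical figures for illustration only - not Nimbus Data specs.
controller_cache_gb = 48   # a fixed, controller-side cache
cache_per_drive_gb = 2     # cache placed on every drive (DCA-style)

for drives in (6, 12, 24, 48):
    print(f"{drives:2d} drives: "
          f"fixed design = {controller_cache_gb / drives:4.1f} GB cache/drive, "
          f"distributed design = {cache_per_drive_gb:.1f} GB cache/drive")
```

With a fixed cache, the cache-to-capacity ratio halves every time the drive count doubles; with cache on every drive it stays constant.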

The ‘secret-sauce’ at the heart of the Nimbus array is the HALO operating system, which provides administration, data protection, optimization, security and monitoring of Nimbus Data arrays. The array presents a single SSD device back to the ESXi host(s), either via a block protocol or NFS. Nimbus Data claim that their newer Gemini model can achieve 1.2 million IOPS in a 2U box, at a latency of only 100 microseconds. Yes, that is 0.1 millisecond latency. The I/O block size used to achieve this figure was 4K, with 80% read & 20% write. They were also able to sustain 12GB/s of throughput with a 256K block size.
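
Those headline numbers hang together arithmetically. A quick back-of-the-envelope check (my own arithmetic, using the figures above):

```python
KB, GB = 1024, 1024 ** 3

# 1.2 million IOPS at a 4K block size (80% read / 20% write)
print(f"{1.2e6 * 4 * KB / GB:.1f} GB/s")   # ~4.6 GB/s of throughput

# 12 GB/s sustained at a 256K block size
print(f"{12 * GB / (256 * KB):,.0f} IOPS") # 49,152 IOPS
```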

Flash Longevity

One of the concerns many people have with flash is its lifespan. Nimbus Data are offering 10 year endurance on their drives. There are a number of things they do to mitigate drive wear-out. One is to cache writes in DRAM: once a full 64KB of writes has accumulated in the cache, they do a full page write to flash. Nimbus Data also have an algorithm which chooses between individual flash cells: each cell is rated, and the algorithm favours cells with a higher rating over cells with a lower rating. All of this contributes to the MLC (Multi-Level Cell) flash drives lasting the guaranteed 10 years. In fact, Scott told me that the Nimbus Data Flash Arrays deployed at eBay 2 years ago have not yet reached 10% usage.
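
To make the two techniques concrete, here is a conceptual sketch of write coalescing and rating-based cell selection. This is purely my own illustration of the ideas described above, not Nimbus Data’s HALO implementation:

```python
PAGE_SIZE = 64 * 1024  # flush to flash only in full 64KB pages

class WriteCoalescingCache:
    """Buffer incoming writes in DRAM and flush only full pages to flash,
    avoiding the partial-page writes that accelerate flash wear."""

    def __init__(self, flash):
        self.flash = flash
        self.buffer = bytearray()

    def write(self, data: bytes):
        self.buffer.extend(data)
        # Each time a full 64KB has accumulated, issue one full-page write.
        while len(self.buffer) >= PAGE_SIZE:
            page = bytes(self.buffer[:PAGE_SIZE])
            del self.buffer[:PAGE_SIZE]
            self.flash.program_page(page)

class RatedFlash:
    """Flash whose cells carry a health rating; new writes are steered to
    the highest-rated cell, and each program cycle costs a little rating."""

    def __init__(self, cell_ratings):
        self.ratings = dict(cell_ratings)  # cell id -> remaining rating
        self.contents = {}
        self.programs = 0

    def program_page(self, page: bytes):
        cell = max(self.ratings, key=self.ratings.get)
        self.contents[cell] = page
        self.ratings[cell] -= 1
        self.programs += 1

# Example: 200KB of small writes becomes three full-page flash programs,
# with the remaining 8KB held in DRAM until more writes arrive.
flash = RatedFlash({"cell-0": 100, "cell-1": 99, "cell-2": 98})
cache = WriteCoalescingCache(flash)
for _ in range(50):
    cache.write(b"\x00" * 4096)   # 50 x 4KB = 200KB
print(flash.programs, "pages programmed,", len(cache.buffer), "bytes buffered")
```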

VAAI Integration

Nimbus Data currently support all three VAAI block primitives – ATS (Atomic Test & Set), Write Same (Zero) and XCOPY (Clone). They are working on the VAAI-NAS primitives, but these are not available yet. The driving factor here of course is the VCAI (View Composer Array Integration) offload – the ability to offload linked clone creation to the storage array for View desktop deployments.
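
If you want to verify that a host has actually detected VAAI support on the array’s devices, the vSphere API exposes a hardware acceleration status per SCSI disk. A minimal sketch using the pyVmomi Python bindings – the vCenter address and credentials below are placeholders:

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for lun in host.config.storageDevice.scsiLun:
            # HostScsiDisk.vStorageSupport reports the VAAI hardware
            # acceleration status: supported, unsupported or unknown.
            if isinstance(lun, vim.host.ScsiDisk):
                print(host.name, lun.canonicalName, lun.vStorageSupport)
    view.Destroy()
finally:
    Disconnect(si)
```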

Scott also told me that they are working on a management plugin for the new vSphere 5.1 web-based client, but it wasn’t available for VMworld 2012. Right now, management is done via an external web-based management tool. However, I am led to believe that Nimbus Data will have a vCenter plugin for their management tool sometime in Q4 2012.

Business Continuance/Disaster Recovery

The Gemini array is designed to be fault tolerant, and replication can be configured in either synchronous or asynchronous mode. Snapshots and replication currently work at the volume level. There is no integration with VMware Site Recovery Manager at this time; this is something Nimbus Data are hoping to have in place in the first half of 2013.
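
The trade-off between the two replication modes comes down to when the write is acknowledged. A simplified sketch of the distinction – my own illustration, not the Gemini implementation:

```python
def replicate_sync(local, remote, data):
    """Acknowledge only once BOTH sites are durable: zero data loss on
    failover, but every write pays the inter-site round trip."""
    local.write(data)
    remote.write(data)         # blocks until the remote site confirms
    return "ack"

def replicate_async(local, remote_queue, data):
    """Acknowledge as soon as the local copy is durable: local-latency
    writes, but a failover can lose whatever is still queued."""
    local.write(data)
    remote_queue.append(data)  # shipped to the remote site in the background
    return "ack"
```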

Overall, this is an amazing piece of technology. I would like to see even more integration with vSphere products and features going forward, as I personally think this is a major differentiating factor in the storage market. Still, over 1 million IOPS in a 2U box – impressive stuff.

Get notification of these blog postings and more VMware Storage information by following me on Twitter: @CormacJHogan
