Nimble Storage revisited – a chat with Wen Yu

Nimble Storage are another company who have been making a lot of waves in the world of storage in recent years. Based in San Jose, CA, they IPO’ed earlier this year and currently have something in the region of 600 employees worldwide. I caught up with Wen Yu, whom I have known since my early days at VMware, where we worked together in the support organization. Wen moved over to Nimble a couple of years back and is now a technical evangelist there. In fact, Nimble were the subject of the very first post on this blog when I launched it almost two years ago. At the time, I wrote about some significant architectural updates in their 2.0 release. My understanding is that their next major release (2.1) is just around the corner, so this was a good time to chat with Wen about some new features and other things happening in the Nimble world.

Architecture Overview

Nimble currently ship two product lines, the CS200 and CS400 series of storage arrays. These arrays come in different configurations based on the number and type of access ports, flash capacity and disk capacity. All Nimble Storage arrays support the iSCSI protocol only at this time. Nimble Storage is what can be termed a hybrid storage array, where a combination of flash technology and spinning disk is used to improve performance while continuing to use cheaper magnetic disks as the persistent store.

CASL – Cache Accelerated Sequential Layout

An integral part of all Nimble Storage arrays is CASL. CASL is the ‘secret sauce’ which aggregates the random writes arriving at the array from lots of different applications and virtual machines into a large stripe of data before committing it to disk. This essentially converts many random writes (the I/O blender) into one large sequential write operation. The benefit of this approach is two-fold: first, it improves write performance; second, it increases the lifespan of the Nimble array’s MLC flash. For read operations, a flash cache is used to keep a copy of the ‘hot’ data for improved read performance.
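
To make the idea a little more concrete, here is a minimal Python sketch of how a CASL-style layout engine might coalesce incoming random writes into a full stripe before issuing a single sequential write to disk. This is purely illustrative; the class, stripe size and index structure are my own assumptions, not Nimble’s actual implementation.

STRIPE_SIZE = 4 * 1024 * 1024    # illustrative stripe size, not Nimble's real geometry

class CaslStyleCoalescer:
    """Toy log-structured writer: many small random writes in, one big sequential write out."""

    def __init__(self, disk_append):
        self.disk_append = disk_append   # callable that appends one stripe to disk
        self.staging = bytearray()       # stands in for the NVRAM/DRAM staging area
        self.index = {}                  # logical offset -> (stripe number, position)
        self.stripes_written = 0

    def write(self, logical_offset, data):
        # The random write lands sequentially in the staging buffer and can be
        # acknowledged immediately; the index remembers where it will live.
        self.index[logical_offset] = (self.stripes_written, len(self.staging))
        self.staging += data
        if len(self.staging) >= STRIPE_SIZE:
            self.flush()

    def flush(self):
        # One large sequential write replaces many small random ones.
        self.disk_append(bytes(self.staging[:STRIPE_SIZE]))
        del self.staging[:STRIPE_SIZE]
        self.stripes_written += 1

The same full-stripe, sequential write pattern is what the two-fold benefit above comes from: the disks see sequential I/O, and the flash sees far fewer small overwrites.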

InfoSight

Every Nimble Storage array comes with a feature called InfoSight. InfoSight is a cloud-based management tool for the Nimble array. It uses data analytics to provide an administrator with lots of detail regarding the array’s performance, usage, and any events which might warrant further attention. When customers log into InfoSight, they are shown a list of all of their arrays, along with space utilization and events from each array. They can then drill down into specific areas such as CPU, networking and cache. There have been some new enhancements added to the “Assets”, “Capacity” and “Performance” tabs. Here is a short, two-minute video on Nimble Storage’s InfoSight capabilities:

VAAI

When I last spoke to Nimble, we discussed support for VMware’s vSphere APIs for Array Integration (VAAI). At the time, they had the WRITE_SAME primitive (for offloading zeroing operations), Hardware Assisted Locking (ATS), which enables ESXi hosts to mitigate the use of SCSI reservations for VMFS volume locks, and the UNMAP primitive, which enables VMFS volumes built on thin provisioned disks to reclaim space after a Storage vMotion or VM deletion. Wen informs me that they also have Thin Provisioning Stun, used for suspending VMs when the underlying datastore runs out of space. The one primitive still missing is XCOPY, which can improve the performance of migration and clone operations such as Storage vMotion; without it, those operations fall back to the host’s software data mover.
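
For quick reference, here is the state of play as described above, expressed as a small Python table. Treat this as my own hedged summary of the conversation, not anything official from Nimble:

# VAAI primitive support on Nimble at the time of writing, per the conversation.
NIMBLE_VAAI_SUPPORT = {
    "ATS (hardware assisted locking)": True,    # VMFS locks without SCSI reservations
    "WRITE_SAME (zero offload)": True,
    "UNMAP (dead space reclamation)": True,
    "Thin Provisioning Stun": True,             # suspend VMs when the datastore fills
    "XCOPY (clone/migration offload)": False,   # the one still missing
}

def is_offloaded(primitive):
    """Return True if the given operation is handled by the array."""
    return NIMBLE_VAAI_SUPPORT.get(primitive, False)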

PSP – Path Selection Policy

Nimble have their own Path Selection Policy, Nimble_PSP_Selected. It comes in the form of a VIB for ease of installation. No additional configuration is required by an administrator; the PSA layer in the VMkernel recognizes volumes from the Nimble array and picks up the special PSP automatically, along with optimal path switching settings for each and every volume. No further action is needed when the Nimble Storage environment is scaled out.
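
The behaviour described above — a volume being recognized and claimed with no admin input — is driven by claim rules in the PSA. Here is a rough Python sketch of that matching logic; the rule table and function are hypothetical stand-ins to show the idea, not VMware’s or Nimble’s actual code:

# Hypothetical claim-rule table: device vendor string -> PSP to assign.
CLAIM_RULES = [
    ("Nimble", "Nimble_PSP_Selected"),  # installed by the Nimble VIB
]
DEFAULT_PSP = "VMW_PSP_RR"              # fallback round-robin policy

def claim_device(vendor):
    """Pick a Path Selection Policy for a newly discovered device."""
    for match, psp in CLAIM_RULES:
        if vendor.startswith(match):
            return psp                  # no administrator action required
    return DEFAULT_PSP

# Every new Nimble volume is claimed automatically, which is why scaling out
# the environment needs no extra multipathing configuration:
assert claim_device("Nimble") == "Nimble_PSP_Selected"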

vSphere Plugin

Nimble have a very nice vSphere plugin, so you can manage your virtualization environment and storage all in one place. This has recently been enhanced too, with new workflows added such as datastore removal, VMFS datastore grow, enhanced add/clone datastore workflows, and vCenter task integration. This ~5 minute video shows off most of the integration. Although it is still a plugin for the C# client, my understanding is that a web client plugin is in the works. The video also shows how Nimble’s PSP is integrated.

Site Recovery Manager/DR

Nimble has had SRM integration since SRM 5.1. Here’s a link to the best practices guide co-authored with VMware’s Ken Werneburg: http://info.nimblestorage.com/bpg-vmware-srm.html

Nimble OS 2.1

The next release of the Nimble OS is due out shortly, with some customers already testing the GA candidate. Wen tells me that Nimble are closely monitoring these customer upgrades to ensure that everything goes smoothly at GA time. The major enhancements in this update (that could be shared) are in the areas of InfoSight improvements and networking.

There were a few enhancements which Wen could not yet share with me. However, he was able to say that this new release will support 802.1Q VLANs on the data path, which means that separate VLANs can be created for different parts of an organization sharing the same array, something Wen tells me a lot of their customers are looking for.
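
As a quick illustration of what 802.1Q buys you, the sketch below builds a VLAN tag in Python: the 16-bit TPID 0x8100 marks the frame as tagged, and a 12-bit VLAN ID keeps each department’s traffic separate on the shared wire. This is generic 802.1Q, not Nimble-specific code:

import struct

def dot1q_tag(vlan_id, priority=0, dei=0):
    """Build the 4-byte 802.1Q tag: TPID (0x8100) followed by PCP/DEI/VID."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)   # network byte order

# Two parts of the organization sharing the same array, kept apart by tag:
finance_tag = dot1q_tag(vlan_id=100)
engineering_tag = dot1q_tag(vlan_id=200)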

VVol Ready

To finish our conversation, Wen and I talked about the Virtual Volumes (VVol) initiative. Nimble Storage partnered with VMware very early on to deliver on this. Wen tells me that they are already VVol ready over at Nimble Storage, and have already demoed a number of capabilities. I’ll close with this final ~5 minute video demonstrating the VVol capabilities that Nimble Storage has implemented so far. The video shows how VVol integration allows operations like VM snapshots and VM clones to be offloaded automatically to the array, and how virtual machine I/O can now be identified on a per-VM basis. Roll on VVols!
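
To give a feel for what “offloaded to the array” means in practice, here is a very rough Python sketch of the control path the video demonstrates: a VM snapshot becomes a single request against the VM’s virtual volumes, handled natively by the array, with I/O statistics tracked per VM instead of per LUN. All names here are hypothetical; this is a mental model, not Nimble’s or VMware’s API:

class VVolArraySketch:
    """Toy model of an array that manages snapshots and stats per VM."""

    def __init__(self):
        self.snapshots = {}   # vm_id -> list of (snapshot id, vvol ids)
        self.io_stats = {}    # vm_id -> I/O counters, visible per VM

    def snapshot_vm(self, vm_id, vvol_ids):
        # With VVols, the host issues one control-path call and the array
        # takes the point-in-time copy itself, with no host-side delta files.
        snap_id = "%s-snap-%d" % (vm_id, len(self.snapshots.get(vm_id, [])))
        self.snapshots.setdefault(vm_id, []).append((snap_id, list(vvol_ids)))
        return snap_id

    def record_io(self, vm_id, bytes_moved):
        # Because each VM owns distinct vVols, I/O is attributable per VM
        # rather than being blended together on a shared datastore/LUN.
        self.io_stats[vm_id] = self.io_stats.get(vm_id, 0) + bytes_moved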

To close, I’ve heard a rumour that Nimble Storage have some major announcements lined up for June 11th. The catch phrase is “Adaptive Flash overwrites the rulebook”. Stay tuned. As I learn more, I’ll be sure to share.
