
WHIPTAIL Announce 4.1.1 Update

Last week, I presented at the UK National VMUG. I took the opportunity to catch up with Darren Williams (Technical Director, EMEA & APAC) of WHIPTAIL, who was also presenting at the event. My first introduction to WHIPTAIL came last year when I first met Darren at another user group meeting, and I posted about their XLR8R array on the vSphere storage blog. Darren & I discussed the changes which WHIPTAIL has undergone in the past 12 months since we last spoke, including the launch of a new range of scale out storage arrays, as well as the new features in WHIPTAIL’s soon to be released 4.1.1 update.

For those of you who do not know WHIPTAIL, they are an all flash storage array vendor. In the past 12 months, they have relaunched the original XLR8R under the new name of ACCELA, and introduced a new scale out array. The new scale out model is called INVICTA. This is WHIPTAIL’s next generation of storage array, building on top of the features and functionality which they first introduced to the market with the XLR8R array.

ACCELA & INVICTA

WHIPTAIL ACCELA & INVICTA continue to be all flash arrays (NAND Flash – MLC). The arrays continue to support a range of protocols including Fibre Channel, NFS & iSCSI, and WHIPTAIL claim that the arrays can achieve somewhere between 250,000 & 650,000 IOPS depending on the model & configuration. The ACCELA is a stand-alone array, whereas the INVICTA is a scale out model which uses what WHIPTAIL refer to as a (top of rack) Silicon Storage Router (SSR). INVICTA comes in 3 base configurations (6TB – 24TB); the base model comprises the SSR and two shelves of SSDs (24 drives per shelf), and it can scale up to 6 shelves (expanding to 72TB capacity) for a fully configured system. There is no central array controller; instead each shelf has its own controller, and availability is achieved by mirroring between the shelves. From a vSphere perspective, these appear as ALUA storage arrays, with some paths reported as optimal and others as non-optimal. In the event of a path failure, a non-optimal path assumes the role of optimal.
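
To visualise that path behaviour, here is a very rough Python sketch of ALUA-style failover. This is purely my own illustration, not WHIPTAIL or vSphere code; the path names and states are just examples.

```python
# Illustrative sketch of ALUA-style path selection and failover.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    state: str  # "active-optimized", "active-non-optimized" or "dead"

def pick_path(paths):
    """Prefer an active-optimized path; fall back to a non-optimized one."""
    for wanted in ("active-optimized", "active-non-optimized"):
        for p in paths:
            if p.state == wanted:
                return p
    raise RuntimeError("no usable path to the LUN")

paths = [Path("vmhba2:C0:T0:L0", "active-optimized"),
         Path("vmhba2:C0:T1:L0", "active-non-optimized")]

print(pick_path(paths).name)         # I/O flows down the optimal path
paths[0].state = "dead"              # path failure...
paths[1].state = "active-optimized"  # ...the surviving path is promoted to optimal
print(pick_path(paths).name)
```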

Secret Sauce

Darren gave me a great description of how WHIPTAIL do their wear leveling and avoid write amplification, which are major issues for MLC in an Enterprise solution if not managed correctly. First off, there can be many random I/Os of different sizes coming into the array (e.g. 4KB, 8KB, 64KB). WHIPTAIL’s Race Runner OS Block Translation Layer (BTL) aggregates these random writes into a sequential write, as MLC is not designed for random I/O and performs best with sequential writes. WHIPTAIL has 48MB of NVRAM, and the random writes are placed into the NVRAM to enable a single sequential write to the SSDs; this 48MB size matches WHIPTAIL’s hardware shelf configuration of 24 drives x 2MB. Once a write is in NVRAM, it is acknowledged back to the host. Darren gave a great visual aid to understand how this works using the game of “Tetris”. When a full stripe/chunk of 48MB (made up of various 4KB, 8KB & 64KB writes) is achieved, it gets flushed to SSD. If a full chunk for a sequential write isn’t available within 500ms, the remaining space is padded out with zeroes and the NVRAM is flushed to SSD. In the event of a power failure, a capacitor keeps the NVRAM persistent long enough to flush its contents to SSD on WHIPTAIL’s ring buffer.
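
To make the “Tetris” analogy a bit more concrete, here is a simplified Python sketch of that coalescing behaviour. The 48MB stripe size and 500ms timeout come from Darren's description above; everything else (the names, the buffer handling) is my own illustration and not WHIPTAIL's actual Race Runner code.

```python
import time

STRIPE_BYTES = 48 * 1024 * 1024   # 24 drives x 2MB, matching the shelf layout above
FLUSH_TIMEOUT = 0.5               # 500ms before a partial stripe is padded and flushed

class WriteCoalescer:
    """Sketch only: gather random host writes into one sequential 48MB stripe."""
    def __init__(self):
        self.buffer = bytearray()
        self.last_flush = time.monotonic()

    def write(self, data: bytes):
        self.buffer.extend(data)          # write is ack'ed to the host once it is in NVRAM
        if len(self.buffer) >= STRIPE_BYTES:
            self.flush()

    def tick(self):
        # Called periodically: pad a partial stripe with zeroes after 500ms.
        if self.buffer and time.monotonic() - self.last_flush > FLUSH_TIMEOUT:
            self.buffer.extend(b"\x00" * (STRIPE_BYTES - len(self.buffer)))
            self.flush()

    def flush(self):
        stripe, self.buffer = self.buffer[:STRIPE_BYTES], self.buffer[STRIPE_BYTES:]
        # In the real array this lands as one sequential write on the SSD ring buffer.
        print(f"flushing {len(stripe)} bytes sequentially to SSD")
        self.last_flush = time.monotonic()

c = WriteCoalescer()
for size in (4096, 8192, 65536):      # mixed random host writes: 4KB, 8KB, 64KB
    c.write(b"x" * size)
time.sleep(0.6)
c.tick()                              # no full stripe within 500ms: pad with zeroes and flush
```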

Another key feature is that WHIPTAIL never does an ‘erase on write’ operation to SSD. This alleviates the undesirable write amplification issue whereby a cell may need to be written to a number of times to handle a single write. However, cells which contain old, unused data have to be erased at some point before they can be used again. WHIPTAIL has a garbage collection process, essentially an algorithm which picks the next best block to erase and rewrite, running continuously in the background.
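
As a rough illustration of the idea only: WHIPTAIL have not published their block-selection algorithm, so this greedy version is entirely my own sketch of how such a collector might pick its next victim block.

```python
# Minimal greedy garbage-collection sketch: pick the block with the most stale
# pages, relocate its live pages, then erase the whole block in one go, so
# normal writes never pay an erase-on-write penalty.
def pick_victim(blocks):
    """blocks: dict of block_id -> {'live': n_live_pages, 'stale': n_stale_pages}"""
    return max(blocks, key=lambda b: blocks[b]["stale"])

def collect(blocks, block_id):
    live = blocks[block_id]["live"]
    blocks[block_id] = {"live": 0, "stale": 0}   # block erased and ready for reuse
    return live  # pages that had to be copied, i.e. the write amplification cost

blocks = {
    "blk0": {"live": 10, "stale": 110},
    "blk1": {"live": 90, "stale": 30},
}
victim = pick_victim(blocks)          # "blk0": the cheapest block to reclaim
print(victim, collect(blocks, victim), "pages relocated")
```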

vSphere Integration Features in 4.1.1

A number of very interesting vSphere integration points appear in the 4.1.1 update. The first of these is support for VAAI (vSphere APIs for Array Integration). This is always nice to see, and of course the benefits are obvious. If an ESXi host can offload storage tasks to the storage array, it frees itself up for other vSphere specific tasks. Darren tells me that WHIPTAIL currently support the three block primitives (ATS, XCOPY & Write_Same) as well as the Thin Provisioning UNMAP primitive. There is no support for NAS primitives just yet.
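
To show why the XCOPY offload in particular matters, here is a conceptual Python sketch (not the actual VAAI or SCSI interface) contrasting a host-driven copy with an offloaded one; the function names and stand-in I/O callbacks are invented for illustration.

```python
# Conceptual sketch: without VAAI, every block of a clone travels through the
# ESXi host; with XCOPY, the host issues one command and the array moves the
# data internally.
def host_copy(read_block, write_block, src, dst, nblocks):
    for i in range(nblocks):                         # no VAAI: host reads, then writes
        write_block(dst, i, read_block(src, i))

def xcopy_offload(issue_scsi, src, dst, nblocks):
    issue_scsi("EXTENDED COPY", src, dst, nblocks)   # VAAI: single offloaded command

# Tiny demo with stand-in I/O functions.
store = {("src", i): f"data{i}" for i in range(4)}
host_copy(lambda d, i: store[(d, i)],
          lambda d, i, v: store.__setitem__((d, i), v), "src", "dst", 4)
xcopy_offload(lambda *cmd: print("offloaded:", cmd), "src", "dst", 4)
```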

The second integration point is that the WHIPTAIL UI has a vCenter plugin. One of the major features of the WHIPTAIL UI is its simplicity. You basically create your initiator group (IG), create your LUN, then map the LUN to the IG. That’s it. And the vCenter plugin has the exact same simplicity as the stand-alone UI. The UI can also provide performance monitoring in real-time, or up to one week of historical performance data. If you need to go back even further, WHIPTAIL can use their auto-support portal to gather and display this information for you.
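
To give a feel for how simple that workflow is, here is a hypothetical Python sketch of the three steps. The class and method names are invented purely for illustration and are not WHIPTAIL's actual management API.

```python
# Hypothetical sketch of the three-step provisioning flow described above.
class ArraySketch:
    def __init__(self):
        self.initiator_groups, self.luns, self.mappings = {}, {}, []

    def create_initiator_group(self, name, wwpns):
        self.initiator_groups[name] = wwpns

    def create_lun(self, name, size_gb):
        self.luns[name] = size_gb

    def map_lun(self, lun, ig):
        self.mappings.append((lun, ig))

array = ArraySketch()
array.create_initiator_group("esx-cluster01", ["21:00:00:24:ff:xx:xx:01"])  # example WWPN
array.create_lun("vmfs-datastore-01", 1024)
array.map_lun("vmfs-datastore-01", "esx-cluster01")   # that's it: three steps
```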

Other 4.1.1 Enhancements

A major enhancement over the previous XLR8R model is the reduction in the number of hot spares which an array needs to carry, along with some very fast rebuild times in the event of a failure. WHIPTAIL have now moved to just one hot spare per shelf of 24 drives, and claim that rebuild times are down to 1 hour should a drive fail. Volumes are now protected via parity plus the hot spare, which means the array can still handle a scenario where two drive failures occur.
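
A rough sketch of why parity plus a single hot spare can ride out two successive drive failures is shown below. This is purely illustrative and simplifies the rebuild behaviour; the timings and layout come only from the claims above.

```python
# Illustrative only: parity tolerates the loss of one drive, and a rebuild onto
# the hot spare restores that tolerance, so a second failure can also be survived.
hot_spares, parity_ok = 1, True

def drive_fails():
    global hot_spares, parity_ok
    if not parity_ok:
        return "data loss"       # already running without parity protection
    parity_ok = False            # data still readable, reconstructed from parity
    if hot_spares:
        hot_spares -= 1
        parity_ok = True         # ~1 hour rebuild onto the hot spare (per WHIPTAIL's claim)
    return "survived"

print(drive_fails())   # first failure: rebuilt onto the hot spare
print(drive_fails())   # second failure: no spare left, but parity still covers it
```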

The other major enhancement which WHIPTAIL introduced is asynchronous replication, albeit with no support for Site Recovery Manager (SRM) yet. WHIPTAIL decided to go with two replication technologies, which they refer to as Private Target and Open Target async replication. The Private Target approach provides continuous data replication between WHIPTAIL arrays using snapshot technology. Open Target replication replicates data between a WHIPTAIL array and a Windows or Linux host with any storage behind it. Both can be used as a replication solution for Disaster Recovery (DR), but the choice depends on the performance you require in DR situations. Using Private Target replication, one could architect a solution which gives you equivalent performance at the DR site. If that isn’t required and you can live with lower IOPS, higher latency and less bandwidth when you go into DR, you can utilise Open Target mode. Open Target mode can also be useful if the Windows or Linux host(s) are hosted by Amazon EC2 or an equivalent service provider. This is not recommended for DR, but could perhaps be useful in other backup and replication scenarios.
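
As a tiny, hypothetical decision helper reflecting that trade-off (the two mode names come from WHIPTAIL, everything else here is invented for illustration):

```python
# Illustrative only: choosing between the two async replication modes described above.
def choose_replication_mode(need_full_performance_in_dr: bool,
                            target_is_cloud_host: bool = False) -> str:
    if need_full_performance_in_dr:
        return "Private Target"   # WHIPTAIL-to-WHIPTAIL, snapshot based
    if target_is_cloud_host:
        return "Open Target"      # replicate to a Windows/Linux host, e.g. on EC2
    return "Open Target"          # lower IOPS/bandwidth acceptable after failover

print(choose_replication_mode(need_full_performance_in_dr=True))
```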

So as you can see, the INVICTA has some nice vSphere & cloud integration points. ACCELA & INVICTA arrays are both on the VMware HCL, and are supported with vSphere 5.0 & 5.1. Worth checking out if you are in the market for a flash array.

Get notification of these blog postings and more VMware Storage information by following me on Twitter: @CormacJHogan
