WHIPTAIL Announce 4.1.1 Update

Last week, I presented at the UK National VMUG. I took the opportunity to catch up with Darren Williams (Technical Director, EMEA & APAC) of WHIPTAIL, who was also presenting at the event. My first introduction to WHIPTAIL came last year when I met Darren at another user group meeting, and I posted about their XLR8R array on the vSphere storage blog. Darren & I discussed the changes which WHIPTAIL has undergone in the 12 months since we last spoke, including the launch of a new range of scale-out storage arrays, as well as the new features in WHIPTAIL’s soon to be released 4.1.1 update.

For those of you who do not know WHIPTAIL, they are an all-flash storage array vendor. In the past 12 months, they have relaunched the original XLR8R under the new name of ACCELA, and introduced a new scale-out array. The new scale-out model is called INVICTA. This is WHIPTAIL’s next generation of storage array, building on top of the features and functionality which they first introduced to the market with the XLR8R array.

ACCELA & INVICTA

WHIPTAIL ACCELA & INVICTA continue to be all-flash arrays (NAND Flash – MLC). The arrays continue to support a range of protocols including Fibre Channel, NFS & iSCSI, and WHIPTAIL claim that the arrays can achieve somewhere between 250,000 & 650,000 IOPS depending on the model & configuration. The ACCELA is a stand-alone array, whereas the INVICTA is a scale-out model which uses what WHIPTAIL refer to as a (top of rack) Silicon Storage Router (SSR). It comes in 3 base configurations (6TB – 24TB), with the base INVICTA model comprising the SSR and two shelves of SSD (24 drives per shelf). It can scale up to 6 shelves (expanding to 72TB capacity) for a fully configured system. There is no central array controller; instead each shelf has its own controller, and availability is achieved by mirroring between the shelves. From a vSphere perspective, these appear as ALUA storage arrays, with some paths appearing as optimized and others appearing as non-optimized. In the event of a path failure, a non-optimized path assumes the role of optimized.
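
To make that path behaviour a little more concrete, here is a minimal Python sketch of how an ALUA target fails over: one path per LUN is reported as optimized, and on failure a surviving non-optimized path is promoted. This is purely a conceptual toy based on the description above, not WHIPTAIL’s SSR implementation, and the path names are made-up examples.

from enum import Enum

class PathState(Enum):
    ACTIVE_OPTIMIZED = "active/optimized"
    ACTIVE_NON_OPTIMIZED = "active/non-optimized"
    DEAD = "dead"

class AluaLun:
    """Toy ALUA model: one optimized path per LUN, the rest non-optimized."""

    def __init__(self, paths):
        # The first path is reported as optimized, the remainder as non-optimized.
        self.states = {p: PathState.ACTIVE_NON_OPTIMIZED for p in paths}
        self.states[paths[0]] = PathState.ACTIVE_OPTIMIZED

    def fail_path(self, path):
        was_optimized = self.states[path] is PathState.ACTIVE_OPTIMIZED
        self.states[path] = PathState.DEAD
        if was_optimized:
            # On failure of the optimized path, a surviving non-optimized
            # path assumes the optimized role.
            for p, state in self.states.items():
                if state is PathState.ACTIVE_NON_OPTIMIZED:
                    self.states[p] = PathState.ACTIVE_OPTIMIZED
                    break

lun = AluaLun(["vmhba2:C0:T0:L0", "vmhba2:C0:T1:L0"])
lun.fail_path("vmhba2:C0:T0:L0")
print(lun.states)   # the surviving path is now reported as active/optimized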

Secret Sauce

Darren gave me a great description of how WHIPTAIL do their wear leveling and avoid write amplification, which are major issues for MLC in an enterprise solution if not managed correctly. First off, there can be many random I/Os of different sizes coming into the array (e.g. 4KB, 8KB, 64KB). WHIPTAIL’s Race Runner OS Block Translation Layer (BTL) aggregates these random writes into a sequential write, as MLC is not designed for random I/O and performs best with sequential writes. WHIPTAIL has a 48MB NVRAM, and the random writes are placed into the NVRAM to enable them to do a sequential write to SSDs – this 48MB size matches WHIPTAIL’s hardware shelf configuration of 24 drives x 2MB. Once the write is in NVRAM, it is acknowledged back to the host. Darren gave a great visual aid to understand how this works using the game of “Tetris”. When a full stripe/chunk of 48MB (made up of various 4KB, 8KB & 64KB writes) is achieved, it gets flushed to SSD. If they don’t get a full chunk to do a sequential write within 500ms, the remaining space is padded out with zeroes and the NVRAM is flushed to SSD. In the event of a power failure, a capacitor keeps the NVRAM persistent long enough to flush the contents to SSD on WHIPTAIL’s ring buffer.
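
For illustration, here is a minimal Python sketch of that “Tetris” style coalescing, assuming the 48MB stripe and 500ms timeout described above. The class and callback names are my own, and this is only a toy model of the idea, not WHIPTAIL’s Race Runner BTL.

import time

STRIPE_SIZE = 48 * 1024 * 1024   # 48MB stripe: 24 drives x 2MB per shelf
FLUSH_TIMEOUT = 0.5              # 500ms before a partial stripe is padded and flushed

class StripeBuffer:
    """Toy model of coalescing random writes into full sequential stripes."""

    def __init__(self, flush_to_ssd):
        self.flush_to_ssd = flush_to_ssd   # stands in for the large sequential SSD write
        self.buffer = bytearray()
        self.last_flush = time.monotonic()

    def write(self, data):
        # Random 4KB/8KB/64KB writes are slotted in, Tetris style; in the real
        # array the write is acknowledged to the host once it sits in NVRAM.
        self.buffer.extend(data)
        while len(self.buffer) >= STRIPE_SIZE:
            self._flush(bytes(self.buffer[:STRIPE_SIZE]))
            self.buffer = self.buffer[STRIPE_SIZE:]

    def tick(self):
        # Called periodically: if no full stripe arrives within 500ms, pad the
        # remainder with zeroes and flush anyway.
        if self.buffer and time.monotonic() - self.last_flush >= FLUSH_TIMEOUT:
            self._flush(bytes(self.buffer) + bytes(STRIPE_SIZE - len(self.buffer)))
            self.buffer = bytearray()

    def _flush(self, stripe):
        self.flush_to_ssd(stripe)          # one large sequential write across the shelf
        self.last_flush = time.monotonic()

buf = StripeBuffer(flush_to_ssd=lambda s: print(f"sequential write of {len(s)} bytes"))
buf.write(b"\x00" * 8192)   # an 8KB random write, acked as soon as it is buffered
time.sleep(0.6)
buf.tick()                  # 500ms passed without a full stripe: pad and flush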

Another key feature is that WHIPTAIL never does an ‘erase on write’ operation to SSD. This alleviates the undesirable write amplification issue, whereby a cell may need to be written a number of times to handle a single host write. However, cells which contain old, unused data have to be erased at some point before they can be used again. WHIPTAIL has a garbage collection process, always running in the background, which is basically an algorithm used to pick the next best block to erase and rewrite.
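
WHIPTAIL have not published the details of that algorithm, but a generic greedy policy gives a feel for what “next best block to erase” means: prefer blocks with the most stale data to reclaim and the least live data that would have to be relocated first. A hedged Python sketch with made-up names, not WHIPTAIL’s implementation:

from dataclasses import dataclass, field

@dataclass
class Block:
    """A flash erase block: pages are either still valid or stale (superseded)."""
    block_id: int
    valid: set = field(default_factory=set)   # page numbers still referenced
    stale: set = field(default_factory=set)   # page numbers holding old, unused data

def pick_victim(blocks):
    # "Next best block": most stale pages to reclaim, fewest valid pages to relocate.
    candidates = [b for b in blocks if b.stale]
    return max(candidates, key=lambda b: (len(b.stale), -len(b.valid)), default=None)

def collect(blocks, relocate):
    """One background GC pass: relocate live pages, then erase the victim block."""
    victim = pick_victim(blocks)
    if victim is None:
        return
    for page in sorted(victim.valid):
        relocate(victim.block_id, page)       # copy live data into a freshly written stripe
    victim.valid.clear()
    victim.stale.clear()                      # the block is now erased and reusable

blocks = [Block(0, valid={1, 2}, stale={3, 4, 5}), Block(1, valid={0, 1, 2, 3}, stale={4})]
collect(blocks, relocate=lambda blk, page: print(f"moving page {page} out of block {blk}"))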

vSphere Integration Features in 4.1.1

A number of very interesting vSphere integration points appear in the 4.1.1 update. The first of these is support for VAAI (vSphere APIs for Array Integration). This is always nice to see, and of course the benefits are obvious: if an ESXi host can offload storage tasks to the storage array, it frees itself up for other vSphere-specific tasks. Darren tells me that WHIPTAIL currently support the three block primitives (ATS, XCOPY & WRITE_SAME) as well as the Thin Provisioning UNMAP primitive. There is no support for NAS primitives just yet.

The second integration point is that the WHIPTAIL UI has a vCenter plugin. One of the major features of the WHIPTAIL UI is its simplicity. You basically create your initiator group (IG), create your LUN, then map the LUN to the IG. That’s it. And the vCenter plugin has the exact same simplicity as the stand-alone UI. The UI can also provide performance monitoring in real time, or up to one week of historical performance data. If you need to go back even further, WHIPTAIL has the ability to use their auto-support portal to gather and display this information for you.
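
To show just how short that workflow is, here is a toy sketch of the three steps. The object and method names are entirely hypothetical – WHIPTAIL does not publish an API like this – the point is simply the IG / LUN / mapping flow exposed by both the stand-alone UI and the vCenter plugin.

class ToyArray:
    """Stand-in for the three-step provisioning flow; not WHIPTAIL's actual API."""

    def __init__(self):
        self.initiator_groups, self.luns, self.mappings = {}, {}, []

    def create_initiator_group(self, name, initiators):
        self.initiator_groups[name] = list(initiators)   # step 1: create the IG
        return name

    def create_lun(self, name, size_gb):
        self.luns[name] = size_gb                        # step 2: create the LUN
        return name

    def map_lun(self, lun, ig):
        self.mappings.append((lun, ig))                  # step 3: map the LUN to the IG

array = ToyArray()
ig = array.create_initiator_group("esx-cluster01", ["iqn.1998-01.com.vmware:esx01"])
lun = array.create_lun("gold-datastore", size_gb=512)
array.map_lun(lun, ig)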

Other 4.1.1 Enhancements

A major enhancement over the previous XLR8R model is the reduction in the number of hot spares which an array needs to carry, along with some very fast rebuild times in the event of a failure. WHIPTAIL have now moved to just one hot spare per shelf of 24 drives, and claim that rebuild times are down to 1 hour should a drive fail. Volumes are now protected via parity and one hot spare, which means that they can still handle a scenario where two drive failures occur.
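
My reading of that claim (an assumption on my part, not something in WHIPTAIL’s documentation) is that the two failures are sequential rather than simultaneous: single parity absorbs the first failed drive, the data is rebuilt onto the hot spare in roughly an hour, and only then can a second failure be absorbed. A small Python sketch of that timing assumption:

def survives(failure_times_hours, rebuild_hours=1.0):
    """Toy check: parity tolerates one missing drive at a time; after a failure
    the volume is exposed until the ~1 hour rebuild onto the hot spare completes."""
    protected_again_at = None
    for t in sorted(failure_times_hours):
        if protected_again_at is not None and t < protected_again_at:
            return False                      # second failure inside the rebuild window
        protected_again_at = t + rebuild_hours
    return True

print(survives([0.0, 2.0]))   # True: second failure after the rebuild completes
print(survives([0.0, 0.5]))   # False: two failures before the rebuild finishes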

The other major enhancement which WHIPTAIL introduced is asynchronous replication, although there is no support for Site Recovery Manager (SRM). WHIPTAIL decided to go with two replication technologies – what they refer to as Private Target and Open Target async replication. The Private Target approach is where there is continuous data replication between WHIPTAIL arrays using snapshot technology. The Open Target approach is where there is data replication between a WHIPTAIL array and a Windows or Linux host with any storage behind that host. Both can be used as a replication solution for Disaster Recovery (DR), but the choice depends on the performance you require in DR situations. Using Private Target replication, one could architect a solution which gives you performance at the DR site equivalent to production. If that isn’t required and you can live with fewer IOPS, higher latency and less bandwidth when you go into DR, you can utilise the Open Target mode. Open Target mode can also be useful if the Windows or Linux host(s) are hosted by Amazon EC2 or an equivalent service provider. This is not recommended for DR, but could perhaps be useful in other backup and replication scenarios.

So as you can see, the INVICTA has some nice vSphere & cloud integration points. ACCELA & INVICTA arrays are both on the VMware HCL, and are supported with vSphere 5.0 & 5.1. Worth checking out if you are in the market for a flash array.

Get notified of these blog postings and more VMware storage information by following me on Twitter: @CormacJHogan

9 Replies to “WHIPTAIL Announce 4.1.1 Update”

  1. Interesting solution. But IMHO it is just another one on the list! What about dedupe and compression? And their scale-out architecture is quite useless with only 72 TB in 14U of rack height. (Their hardware does not seem very well designed, and not dense enough.)
    And last, the “RAID 5 like” scheme used for data reliability seems a long way from what competitors offer with Vradi, RAID-SE or RAID-3D.

    Well, all this sounds like much ado about nothing …

    1. Not sure I would agree. I know that Whiptail used to have the dedupe & compression features in the previous architecture, and I’m sure it’s only a matter of time before they are introduced into the newer architecture. Maybe the scalability isn’t there for every use case, but I’m sure it meets the needs of many customers. However, I don’t work for Whiptail, so perhaps they can respond in kind if they see your comment.

    2. There is a fine line between capacity and performance when you design an architecture. Most dedupe implementations force all traffic through them and restrict line-rate performance; others have to dedupe data after it has been written, as part of a schedule. The whole reason why WHIPTAIL has been successful is that the architecture concentrates on providing high levels of performance where it is needed.

      As for your comment about the scale-out architecture and 72TB being useless, I would question this, as there aren’t many flash implementations that can compete with this figure. The other unique point about the INVICTA is the ability to add storage nodes on top of the base configuration and improve performance and bandwidth linearly. With this in mind, WHIPTAIL can increase the number of storage nodes above the current six, and WHIPTAIL has recently announced the INFINITY product, due to be released in Q1 this year, with the aim of supporting up to 15 shelves and then up to 30 shelves, which allows scalability up to 360TB and IOPS figures of up to 4 million.

      If 360TB and 4 million IOPS isn’t enough for you in one array, how much performance do you need, and for what use case?
