First off, there are a number of supported GreenBytes arrays on the VMware HCL, namely their older GB series of arrays (which are no longer actively marketed) and their HA2200 model. However, their newer IOOE (IO Offload Engine) has recently appeared on our HCL, too. We talked a little bit about the IOOE and where GreenBytes positions this product. IOOE is aimed squarely at the VDI market, but with a scale-out aspect. It starts at 1,000 VDI desktops and can scale up to 4,500 seats for a fully populated array; the scalability is based on the SSD capacity of the array. If additional seats over and above 4,500 are required, a new array can be deployed.
IOOE presents LUNs to ESXi hosts over 10Gb iSCSI or 8Gb Fibre Channel (however, checking the HCL, it looks like it may only be qualified for iSCSI at the moment, with FC coming down the line).
GreenBytes IOOE offers full support for the complete set of VAAI block primitives (Block Zero, Full Copy, Hardware Assisted Locking (ATS) & Thin Provisioning). This is always good to see.
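If you want to verify this on your own hosts, the usual way is `esxcli storage core device vaai status get` on the ESXi host. Below is a small Python sketch that parses that command's output and maps the status lines back to the four primitives; the device name and sample output are illustrative, not captured from a real IOOE array.

```python
# Sketch: parse the output of `esxcli storage core device vaai status get`
# and report the status of each VAAI primitive for a device.
# SAMPLE_OUTPUT is illustrative only, not from a real GreenBytes array.

SAMPLE_OUTPUT = """\
naa.6001405f6194d0b8f2a4e37bc9d81a22
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported
"""

def vaai_status(output: str) -> dict:
    """Return a {primitive: status} dict from esxcli VAAI status output."""
    status = {}
    for line in output.splitlines():
        line = line.strip()
        # Skip the device name line and any field without a value.
        if line.endswith("Status:") or " Status: " not in line:
            continue
        key, _, value = line.partition(" Status: ")
        status[key] = value
    return status

# ATS = Hardware Assisted Locking, Clone = Full Copy (XCOPY),
# Zero = Block Zero (WRITE SAME), Delete = Thin Provisioning (UNMAP)
primitives = vaai_status(SAMPLE_OUTPUT)
print(primitives)
```

A device showing `supported` for all four lines is using the full primitive set the array advertises.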
High Availability in the IOOE is provided by redundant and hot-swappable components such as drives, fans and power supplies, as well as completely redundant storage controllers. Every component is duplicated, with dual heads & dual paths. GreenBytes also use patented RAID technology (a mixture of RAID 0 and RAID 6). In short, every component in the array is fully redundant.
As mentioned, everything is redundant, so the IOOE ships in a 6U format: two 2U controller chassis and one 2U SAS drive enclosure which can hold up to 24 x 400GB SSDs.
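For a rough sense of what that shelf gives you, here is a back-of-envelope calculation. Note the parity overhead is an assumption on my part: GreenBytes don't publish the exact cost of their patented RAID 0/6 mix, so I've assumed a RAID-6-style two drives of parity for illustration.

```python
# Back-of-envelope capacity for a fully populated IOOE drive shelf.
# PARITY_DRIVES is an assumption (RAID-6 style double parity);
# the real overhead of GreenBytes' patented RAID 0/6 mix is not published.

DRIVES = 24
DRIVE_GB = 400
PARITY_DRIVES = 2  # assumed

raw_gb = DRIVES * DRIVE_GB
usable_gb = (DRIVES - PARITY_DRIVES) * DRIVE_GB

print(f"raw: {raw_gb} GB, usable (assumed): {usable_gb} GB")
```

So roughly 9.6TB raw per shelf before RAID overhead, and it is the dedupe/compression (next section) that stretches that across thousands of desktops.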
Inline Deduplication and Compression
This is the one aspect that the GreenBytes team wanted to call out as being particularly unique to their IOOE storage solution. Using their patented zero-latency inline deduplication technology, GreenBytes can reduce the space consumed by linked clones by up to 80%, and by up to 97% for full clones. OK, so we’ve heard excellent compression/dedupe figures from other array vendors before; what makes this so special? Well, in this case, the dedupe is done in a single machine instruction, so effectively no latency is incurred. This is why GreenBytes tell their VDI customers that they can use full clones instead of linked clones – they get great space savings with no impact on I/O.
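To see why full clones dedupe so well, here is a minimal sketch of generic hash-based inline block deduplication (my own illustration, not GreenBytes' patented implementation): every incoming block is fingerprinted, and only blocks with a previously unseen fingerprint consume physical space, so identical full clones collapse to a single physical copy.

```python
import hashlib

# Sketch of generic hash-based inline block deduplication.
# This is illustrative only; it is NOT GreenBytes' patented implementation.

class DedupeStore:
    def __init__(self):
        self.blocks = {}         # fingerprint -> block data (physical space)
        self.logical_blocks = 0  # total blocks written by clients

    def write(self, block: bytes) -> str:
        """Write one block; store it physically only if its content is new."""
        self.logical_blocks += 1
        fp = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(fp, block)
        return fp

    def savings(self) -> float:
        """Fraction of logical space NOT consumed physically."""
        return 1 - len(self.blocks) / self.logical_blocks

store = DedupeStore()
golden_image = [bytes([i]) * 4096 for i in range(100)]  # 100 toy 4K blocks

# Provision 50 "full clones" of the same golden image:
for _ in range(50):
    for block in golden_image:
        store.write(block)

print(f"space saved: {store.savings():.0%}")
```

In this toy case the 50 clones dedupe down to one physical copy of the image, which is the effect behind the high full-clone figures quoted above; real desktops diverge over time, so real-world savings depend on how much the clones are modified.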
GreenBytes support snapshots at the volume level which are fully deduplicated for space-saving.
Replication & DR
Replication is also done at the volume level. There are a few neat things which GreenBytes does from a replication perspective, such as the ability to do many-to-one replication to a single target. They also have patented deduplicated replication, where only unique changed blocks are replicated.
Unfortunately, there is no integration with Site Recovery Manager at this time for DR, but according to Steve (CEO), this certification is imminent with a GreenBytes SRA in the works. GreenBytes is working with VMware to get this completed ASAP.
There is no vSphere Management plugin to manage GreenBytes from the vSphere UI. This is a shame, and I hope GreenBytes will consider this going forward as the ability to manage both the virtual infrastructure and storage infrastructure from a single management console has huge benefits for administrators.
I asked Steve what it was that made IOOE unique in a market where lots of storage vendors are positioning themselves as the solution to the VDI storage problem. The first thing Steve mentioned was that, because of their patented dedupe technology, customers can feel confident in using fully cloned desktops rather than VMware linked clones when provisioning desktops. Steve added that linked clones may still be used, but I suspect customers would prefer full clones over linked clones if they consume what amounts to the same amount of physical disk space on the array.
So what about vIO?
Well, that’s the hardware product taken care of. What about the vIO product mentioned at the beginning of the article? For all intents and purposes, vIO can be considered as providing all of the functionality of the IOOE, but deployed in the form of a storage appliance. In a nutshell, vIO is a virtual storage appliance which uses some flash storage to divert boot storms and swapping away from the SAN, improving the overall storage performance in a VDI environment.
The vIO appliance is designed to run on flash (which could be PCIe, local SSD or flash on an external array). The vIO virtual storage appliance takes storage mapped from the back-end storage array, and presents datastores (NFS, if I am not mistaken) to the hypervisor. Any virtual desktops deployed to the NFS datastore can then leverage the I/O acceleration and dedupe/compression capabilities of the vIO virtual storage appliance. Currently, vIO works only on the VMware ESXi hypervisor.
How is vIO sold?
Steve informed me that vIO is licensed in 100-seat increments, so it is clearly aimed at the small to mid-size VDI market. Customers requiring larger seat counts should consider the physical IOOE product.
GreenBytes are doing some very cool things at the moment. It is very interesting to see more movement in the I/O-acceleration-via-virtual-appliance space – using virtual storage appliances with some ‘secret sauce’ to accelerate the I/O in VDI deployments seems to be resonating with folks as a low-cost way of handling storage performance issues.
While GreenBytes have a lot of decent integration points, it will be even better when there is a complete DR solution using SRM. It would also be nice to see management integration with the vSphere client.
But for those of you contemplating a VDI roll-out, GreenBytes may have products to help you on the journey. What’s nice is that you can start out small, and scale as necessary, either with IOOE or with vIO.
[Update] GreenBytes were acquired by Oracle in May 2014. More here