COHO Data – out of stealth today!

There is a new storage vendor on the block – COHO DATA – who just exited stealth and launched their new product with the tagline ‘Storage for the Cloud Generation’. I recently had the opportunity to catch up with Andy Warfield, co-founder and CTO of COHO DATA. Andy has a long history of involvement with storage and virtualization. A graduate of Cambridge University in the UK, he was involved in XenSource, where he developed much of the low-level storage integration mechanisms for external storage arrays such as NetApp and Dell EqualLogic. Andy gave me an in-depth interview about the new product and why COHO DATA took certain development directions.

The Vision

Andy told me that they spoke to many, many customers about storage management and usability before embarking on this project, and that the vast majority of them had the same complaints:

- difficulty in refreshing storage infrastructure as new technologies become available (forklift upgrades);
- having to build silos for certain departments, and even for individual applications;
- not being able to understand the performance and behavior of their storage (storage as a black box);
- difficulty in troubleshooting and diagnosing issues end-to-end;
- having to manage their systems via aging management interfaces, which were invariably cluttered and complex.

To be honest, these are the same sorts of complaints I hear when talking to customers about storage, especially around performance, troubleshooting and management. I’m sure many of you agree. Andy also mentioned that when he attended storage presentations from various vendors at conferences, it was all about the IOPS, with very little discussion of the broader storage problems and how these vendors were going to address them.

Andy then told me that he wanted to achieve a number of goals with the new storage platform. First, they wanted to build it from commodity components, make it perform like crazy, and then wrap it up in a way that customers could simply plug it into their environment and scale it out in a linear fashion – a bottleneck-free architecture. They also wanted to create a management interface that gives an engineer or customer all the detail needed to quickly diagnose any performance problems that might arise. He feels that COHO DATA achieves just that.

COHO MicroArrays and COHO Chassis

First things first – this is not a converged platform; this is a storage platform. The platform is made up of MicroArrays. Each MicroArray contains compute, network, disk and flash, and may be considered a storage or data hypervisor. COHO claim that a single chassis consisting of two MicroArrays can deliver a predictable 180K IOPS. It is sold as a ‘rack’ configuration, which includes a top-of-rack 10Gb DataStream switch. Using OpenFlow, COHO can manage all of the I/O routing, data placement and load balancing. It is this management at the switch level that guarantees linear performance and low latency as the platform scales. The COHO chassis is the building block of the platform, allowing scale-out from one to hundreds of MicroArrays, and thus from TBs to PBs. Scaling out is very simple, as new MicroArrays are auto-discovered and configured in a matter of minutes.
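
To make the switch-level idea a little more concrete, here is a minimal sketch of the kind of decision an SDN controller can make. This is purely my own illustration – the names, the shared NFS IP and the load metric are assumptions, not COHO’s actual OpenFlow rules: pick a MicroArray for each client connection and express that choice as a match/action rule pushed down to the switch.

```python
# Hypothetical sketch of SDN-style connection routing to MicroArrays.
# Names, the shared NFS IP, and the load metric are all assumptions.
from dataclasses import dataclass

@dataclass
class MicroArray:
    name: str
    port: int              # switch port the MicroArray hangs off
    active_conns: int = 0  # crude load metric

def route_client(client_ip: str, arrays: list[MicroArray]) -> dict:
    """Pick the least-loaded MicroArray and express the choice as a
    match/action entry, the general shape of an OpenFlow flow rule."""
    target = min(arrays, key=lambda a: a.active_conns)
    target.active_conns += 1
    return {
        "match":  {"src_ip": client_ip, "dst_ip": "10.0.0.1"},  # shared NFS IP
        "action": {"output_port": target.port},
    }

arrays = [MicroArray("ma-1", port=1), MicroArray("ma-2", port=2)]
print(route_client("192.168.1.50", arrays))  # client pinned to port 1 (ma-1)
```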

One point to note is that there is no specific hardware redundancy built into these MicroArrays. As mentioned previously, COHO wanted to stick with commodity systems, so they deal with resilience and redundancy in the higher layers of the stack. COHO systems are deployed with redundant switches, and the connections between clients, switches and the ports on the MicroArrays are butterflied. As a result, even if a complete switch fails, the platform maintains full connectivity to all the data, losing only half of the available bandwidth.

Under the covers, COHO DATA have implemented an object store. The platform presents NFS datastores to the ESXi hosts. However, this storage platform is not limited to virtual machines; COHO DATA have plans to expand it to other general-purpose storage use cases in the future.

Data Profiles

COHO supports a tiering approach to storage through the use of data policies. You can select a particular data policy when deploying a VM and, using the smarts of the MicroArray, the objects which make up that virtual machine can be placed on different storage components across the platform, including a combination of flash and HDD.

COHO supports application-specific performance policies in their own management UI, but Andy mentioned that they are currently exploring the use of VASA to enable this configuration through vCenter. However, they feel their own implementation is superior, as it lets you set a profile at a finer granularity and even change it dynamically on a per-object basis, something that VASA does not allow you to do right now.
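
As a rough mental model of that per-object granularity, here is a tiny sketch. It is entirely my own illustration – the policy fields and names are assumptions, not COHO’s API: each object carries a small policy record that can be swapped out at runtime, rather than inheriting a single profile from the datastore.

```python
# Illustrative only: policy fields and names are assumptions,
# not COHO's actual data-profile API.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataPolicy:
    name: str
    flash_fraction: float  # share of the object's chunks kept on flash
    replicas: int          # copies kept for resilience

POLICIES = {
    "default":  DataPolicy("default",  flash_fraction=0.1, replicas=2),
    "database": DataPolicy("database", flash_fraction=0.8, replicas=2),
}

# Policy is tracked per object (per VMDK), not per datastore ...
object_policy = {"vm01.vmdk": POLICIES["default"]}

# ... so a single VMDK can be re-tiered dynamically, leaving every
# other object on the same datastore untouched.
object_policy["vm01.vmdk"] = POLICIES["database"]
print(object_policy["vm01.vmdk"])
```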

Object Placement and Layout Policy

COHO has replication technology based around the concept of replication domains. In other words, if a MicroArray, chassis or rack fails, the replication domain can ensure that a full copy of the data is still available. We mentioned that objects can be placed on different storage components across the platform – this is based on the ability to store virtual machines as objects. These objects are made up of multiple chunks (in many ways, similar to how VMware’s Virtual SAN stores virtual machine objects). Each object has a mapping table which allows its chunks to be placed in many locations, letting the data placement algorithms maintain that linear performance and low latency. So while an object’s on-disk layout might be a mix of flash and disk, there is one set of metadata for the complete object. This makes it very simple for COHO to provide storage services like snapshots, deduplication and so on. However, it was pointed out that neither deduplication nor compression is in the initial release; both are on the road-map. Snapshots are in there, and they are done at the object level, i.e. the virtual machine disk (VMDK) level.
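
Here is a minimal sketch of how those pieces could fit together. The structures and names are my assumptions, not COHO’s real metadata format: an object is a list of chunks, each chunk records where its replicas live and on which tier, and a simple check confirms that replicas span more than one replication domain.

```python
# Illustrative sketch of the object layout described above; the
# structures are assumptions, not COHO's on-disk metadata format.
from dataclasses import dataclass

@dataclass
class Replica:
    microarray: str
    domain: str   # replication domain, e.g. a chassis or a rack
    tier: str     # "flash" or "hdd"

@dataclass
class Chunk:
    offset: int
    length: int
    replicas: list[Replica]

# One set of metadata for the whole object, even though its chunks
# mix tiers and devices; snapshots operate at this (VMDK) level.
vmdk_chunks = [
    Chunk(0 << 20, 4 << 20, [Replica("ma-1", "rack-a", "flash"),
                             Replica("ma-3", "rack-b", "flash")]),
    Chunk(4 << 20, 4 << 20, [Replica("ma-2", "rack-a", "hdd"),
                             Replica("ma-4", "rack-b", "hdd")]),
]

def survives_domain_failure(chunk: Chunk) -> bool:
    """True if the chunk's replicas span more than one replication domain."""
    return len({r.domain for r in chunk.replicas}) > 1

assert all(survives_domain_failure(c) for c in vmdk_chunks)
```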

I also asked Andy about a DR solution, i.e. replicating objects across sites. I guess this could also be a building block for a stretched COHO solution. Andy said that this functionality will not be in the initial release, but it is something they have on the roadmap.

User Interface

Andy then showed me a demo of their UI for managing the platform. I have to admit that it was very intuitive to use, and allowed one to drill down very easily to low-level views of the platform. The one comment I got a lot from customers regarding Virtual SAN was not to make it a “black box” – they needed something that can be easily understood, with problems easily diagnosed. I think COHO DATA have something nice in this regard, and it is obvious that a lot of time and effort went into making it so. It’s probably one of the nicest interfaces I’ve seen since Tintri’s.

vSphere Interoperability

I then asked Andy a number of questions around vSphere interoperability.

SRM/vR – interoperability with SRM is planned to follow shortly, in the v1.5 release next year.

VDP/VADP – interoperability with VDP/VADP is planned for the v1.0 release later this year.

VAAI – COHO will support the VAAI-NAS File Cloning primitives in v1.0, and are planning to support the other primitives shortly thereafter.

UI Plugin – COHO’s DataStream Management UI is currently standalone, but they have already started working on integration into the vSphere web client, and plan to release this early next year.

What about the price?

I put this question to Andy. The response was that each block/chassis with 2 x MicroArrays contains 39TB, and that the price is $2.50 per GB. If I extrapolate that out, it comes to just under US$100K (list) per block/chassis. When I queried this with Andy, he confirmed that my math was correct. The SDN-enabled DataStream Switch is sold separately at a US list price of $30K. It is a 52-port 10GbE switch and can be used to scale 6-8 DataStream Chassis into a 300TB+ cluster of MicroArrays. Customers can scale by adding single MicroArrays at a US list price of $50K each.
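
For anyone who wants to sanity-check that arithmetic, here is a quick back-of-the-envelope sketch. It assumes decimal terabytes (1TB = 1,000GB), as is usual for marketed storage capacity, and the 8-chassis cluster is just an illustration at the top of the quoted 6-8 range.

```python
# Back-of-the-envelope check of the quoted pricing; figures are from
# the article above. Assumes decimal terabytes (1 TB = 1,000 GB).
CHASSIS_CAPACITY_TB = 39      # per chassis (2 MicroArrays)
PRICE_PER_GB = 2.50           # USD
SWITCH_PRICE = 30_000         # DataStream switch, sold separately

chassis_price = CHASSIS_CAPACITY_TB * 1_000 * PRICE_PER_GB
print(f"Per chassis: ${chassis_price:,.0f}")           # $97,500 - just under $100K

# Hypothetical 8-chassis cluster behind one switch.
cluster_tb = 8 * CHASSIS_CAPACITY_TB                   # 312 TB - the "300TB+"
cluster_price = 8 * chassis_price + SWITCH_PRICE
print(f"8 chassis + switch: ${cluster_price:,.0f} for {cluster_tb} TB")
```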

Where can I learn more?

Visit cohodata.com for more information.

14 Replies to “COHO Data – out of stealth today!”

  1. For a minute my first thought was that COHO might be the first two letters of Cormac and Hogan 🙂

  2. Sounds like a really interesting solution, and with a clear use case in mind. At that size and price point, it’s surely not a solution aimed at staying at a few nodes; the savings seem to come when the cluster has several nodes, targeting (almost) PB-scale deployments.

    This makes me even more interested in meeting them at Storage Field Day. Thanks Cormac for the introduction.

  3. In hardware spec at least, it looks to be quite similar to a Nutanix solution… only without presenting a converged compute layer.

    I’m curious why such a (relatively) large amount of aggregated compute is needed for just the storage platform… and whether ultimately it’s a bit of a waste.

    It’s not ‘just’ a skinned Ceph cluster, is it?

    1. This is a great question — but bear with me for a sec while I try to turn it upside down:

      We started working with emerging PCIe flash hardware a couple of years ago, and were pretty impressed to realize that a _single_ card was capable of completely saturating a 10Gb network link… stacking up multiple cards behind that link, as we’ve done with disks in traditional storage architectures, was leaving performance on the table. Moreover, handling the work involved in moving data between even a single flash device and the 10Gb connection, and implementing even a lightweight storage stack, consumes a fair chunk of CPU.

      These new flash form factors that live on PCIe (and the newer technologies like PCM that are going to follow them over the next few years) are astoundingly fast, but they are also pretty expensive. They often cost as much as the server hardware that houses them. So building a storage stack that involves _balanced_ flash, network, and CPU — which is what we’ve done in the product — is really an attempt to get you the most value out of an investment in high-performance flash.

      So getting back to your question, we absolutely have a “(relatively) large amount of aggregated compute” — the flash is so capable that we have to! When the system is serving requests at saturation, it is making full use of that compute: I wouldn’t want to cannibalize the value of expensive shared storage resources by making them fight with co-located application workloads — that would result in a constrained amount of memory and compute. Instead, I’d like to present storage in a balanced, high-performance, and scalable way, and let you scale and maintain your compute resources (and VMM licenses) in direct response to your application demands.

      When we aren’t going full-throttle, those CPUs let us do some pretty neat stuff in the product that storage implementations haven’t traditionally had the chops for. Things like performing background analytics over your storage workloads, and doing workload-specific tuning of layout and tiering in response to how you access your data.

      Regarding the Ceph question: No — absolutely not. We built the system from the ground up, with the specific intention of virtualizing high-performance flash, eliminating as many unnecessary layers as we could, and getting solid scale out performance.

      If you’re interested in more details, I’ll go into this at Tech Field Day in a few weeks, and there’s also an upcoming webinar (our first ever) where I’ll walk through these things in more depth. Information on both is at http://www.cohodata.com.

      Thanks again for the questions!

      1. Thanks Andrew for the detailed reply.

        Since your architecture is built for sustained 10g NIC saturation out of each node, I can see the need for oodles of CPU.

        Should be lightning fast! Good work.

        I look forward to the Tech Field Day videos (Wish I could be there in person one day!)

  4. It looks to me like COHO competes with Isilon, but potentially cheaper, with a swiss-army-knife storage style on the roadmap…

    Is the price per GB the only differentiator?

    1. We’ve had conversations with people over the past few months where they’ve described our product as “Isilon done right”, so the goal of building a great scale-out storage product is certainly a strong aspect of what we are pushing on. Here are a couple of key differences:

      1. Like most enterprise storage products, Isilon wasn’t written for flash — especially for the levels of performance you get from PCIe flash. The approach we’ve taken borrows a lot from our experience with Xen and CPU virtualization: just as the CPU was an expensive and undersubscribed resource a decade ago, PCIe flash presents a similar utilization challenge today. The base layer of our design virtualizes high-performance flash and attaches it directly to the network, with a significant focus on avoiding any overheads that might interfere with performance.

      2. Storage systems have always struggled to scale metadata and request processing as they grow. Integrating with a 10Gb SDN switch has allowed us to transparently scale the storage controller as we scale the underlying media. Clients see a single NFS IP address, but we are able to scale the implementation of that NFS controller across all the MicroArrays, and actively migrate client connections across them in response to load and data locality.

      Isilon has been bought for scalable capacity, but not so much for the sort of scalable, random-access performance that is a reality in enterprise environments today. All-flash appliances give you performance, but they cost a bundle and don’t give you capacity. We’re aiming for the trifecta: scale-out _and_ high performance, while taking advantage of commodity hardware and hybrid media to keep the system affordable, and allowing systems to get continued value from improvements in the state of the art in commodity flash and network hardware as they grow and evolve.
