Nick opened with a discussion of data pain points. In many data centers there is an absolute plethora of storage products and devices: tier-1 storage (the really important data, perhaps running on all-flash or hybrid arrays), and then the tier-2/secondary storage. Nick went on to say that tier-2 storage could be made up of a combination of backup storage, analytics data and other operational workloads (snapshots, clones, replicas, etc.). And of course, we spend most of our cycles taking care of our tier-1 storage, making sure it is all happy and performing well, and not worrying quite so much about the other stuff. So much so that, in many cases, we miss things that go wrong in tier 2. For example, a runaway/zombie snapshot that no one notices sits there for weeks or months, growing and growing and growing. Or a test/dev environment that was set up once and forgotten about. And this costs us $$$. I’m pretty sure everyone can relate to these situations. So what can Cohesity do to help you manage this secondary data/storage?
Cohesity want to offer you a solution to these tier-2 storage issues. They provide a scale-out hardware appliance that is currently tested at 16 nodes, but that number is growing as we speak, and new nodes are auto-discovered. The appliance offers three main features. First, it offers protection (i.e. a backup solution). Second, it presents a distributed NFS datastore to your vSphere environment, presumably to allow you to do cool things with the data you have backed up, such as present a previous point-in-time copy of your virtual machine (that’s a bit of conjecture on my part – Nick didn’t actually state this). Third, it provides analytics on the data you have backed up. The great thing about the analytics is that (a) it should help with locating the “dark data” from leftover snaps of dead test/dev environments, and (b) it can also project future storage consumption and purchase requirements. In the Q&A, a question was asked about archiving to the cloud. This isn’t currently available, but reading between the lines (Nick was balancing that fine line between what he can tell us and what he would like to tell us), it would seem to be one of their goals going forward.
We didn’t get a good look at the UI, as it is apparently undergoing a redesign, but Nick did mention that it will be HTML5-based, allowing it to work on mobile devices and the like.
Nick closed the webinar with a discussion of their Open SDK; the hope is that, going forward, Cohesity will become the data platform for lots of new applications/workloads such as Docker and Hadoop, but at the moment the focus appears to be on virtual machine workloads.
The 30 minutes that Nick had to showcase the Cohesity solution were very informative. I will admit that, to me anyway, there seems to be quite a similarity between the Cohesity solution and the Rubrik solution announced earlier this year. This is no bad thing: there is a lot of pain around backup at present, and a converged backup storage and backup software solution resonates well with administrators.
If you want to learn more about the Cohesity solution, there is an early access program – more info here.
Cohesity are also at VMworld 2015. I’m definitely going to grab some time with Nick and the Cohesity team and try to fill in the gaps in my knowledge. You can find them at booth #428 in the Solutions Exchange. And there will be Piña Coladas apparently 🙂