Cohesity have released the next version (2.0) of the Cohesity Data Platform. I met Cohesity at VMworld 2015, and I wrote about my first impressions of the solution in a blog post back in August 2015. In a nutshell, Cohesity are positioning their Data Platform as hyper-converged secondary storage. They want to stop the siloing of storage for backups, file shares and analytics in the data center, and instead offer you a single platform for all of your secondary storage needs. Now they are ready with the next version, so let's take a quick look at what is coming in this new release.
According to Nick Howell, Tech Evangelist at Cohesity, this new UI came about based on direct feedback from the community and users. I know that when we last met at VMworld, they were very eager to get input on how best to lay out the user interface, and they have spent a lot of time on getting this just right. Cohesity have been doing lots of work around the dashboard to make it as simple and as consumable as possible. Here is an updated dashboard that Nick shared with me:
I think this is a feature that many customers will appreciate: the ability to safely replicate your backups (and other secondary storage content) to a remote site. Cohesity 2.0 introduces a new capability for site-to-site replication between Cohesity clusters. Once it's configured, replication is initiated simply by making the appropriate settings in the profile. In the Cohesity platform, profiles determine RPO/RTO, location, indexing and so on.
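To make the profile idea a little more concrete, here is a sketch of the kinds of settings such a profile bundles together. The field names and values below are my own illustration of what the post describes, not Cohesity's actual API or schema:

```python
# Hypothetical shape of a Cohesity protection profile -- field names
# are illustrative assumptions, not the product's real schema.
replication_profile = {
    "name": "tier1-vms",
    "rpo_minutes": 60,                  # how much data loss is acceptable
    "retention_days": 30,               # how long copies are kept
    "location": "remote-cluster-dr1",   # site-to-site replication target
    "indexing": True,                   # index contents for later search
}

print(replication_profile["location"])
```

The point is simply that replication is not a separate workflow: pointing a profile at a remote cluster is what turns it on.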
Support for SMB Protocol
In the original V1 version, Cohesity supported an NFS file share which could be presented to your vSphere environment. In 2.0, Cohesity introduce support for SMB 3.0. One assumes that this will enable VMs to leverage file services, and not just vSphere hosts. I asked Nick about this. He told me that “the introduction of SMB, at least in this iteration, was intended to lay the foundation for many consumption cases.” He went on to say that “this [feature] is going to enable multiple use-cases across the board. Everything from mapping a home drive natively, allowing the 3rd party software vendors to use us as a piece of SMB target storage, and even some of the newer Win2012 and Hyper-V SMB 3.0 use cases.”
Since vSphere is already covered with Cohesity’s NFS-first approach in V1, in 2.0 “any Windows OS, VM or not, can now mount UNC path file systems directly from the Cohesity platform, which brings along with it ALLLLL of the storage-side features we bring to the table once the data is on the box.”
Cohesity 2.0 now has hardware-accelerated, FIPS-compatible AES 256-bit encryption.
Automated VM Cloning for Test/Dev
This is all about spinning up a previous point-in-time (PIT) backup copy of your VM for test/dev purposes. Cohesity 2.0 adds automated cloning of backed-up VMs, speeding up the process of spinning up zero-space clones.
Cohesity 2.0 enables customers to take advantage of less expensive cloud storage. With 2.0, customers can now place cold or least-used data on Google Cloud Storage Nearline, Microsoft Azure, Amazon S3 and/or Amazon Glacier.
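A tiering policy like this generally boils down to how long data has sat idle. Here is a minimal sketch of that decision, assuming idle-time thresholds of my own choosing; Cohesity's actual policy engine and thresholds are not documented at this level of detail:

```python
from datetime import datetime, timedelta

# Assumed thresholds for illustration only -- not Cohesity's defaults.
WARM_AFTER = timedelta(days=30)    # candidate for cheaper cloud storage
COLD_AFTER = timedelta(days=180)   # candidate for deep archive

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Pick a storage tier based on how long the data has been idle."""
    idle = now - last_access
    if idle >= COLD_AFTER:
        return "archive"   # e.g. Amazon Glacier
    if idle >= WARM_AFTER:
        return "cloud"     # e.g. Google Nearline, Azure, S3
    return "local"         # stays on the Cohesity cluster

now = datetime(2016, 1, 1)
print(choose_tier(datetime(2015, 12, 20), now))  # local
print(choose_tier(datetime(2015, 10, 1), now))   # cloud
print(choose_tier(datetime(2015, 1, 1), now))    # archive
```

The appeal is that the hot working set stays on the cluster while rarely touched data quietly migrates to the cheapest suitable target.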
Another new feature of the Cohesity Data Platform 2.0 is the ability to do adaptive throttling of backup streams. I guess what this means is that during any other use of the secondary storage, such as data analytics, administrators can throttle the backups temporarily to get the other jobs, such as reporting, completed. Thinking about it, I guess you can also prioritize certain backup streams over others, in the case of critical VMs where you want the backup to complete as quickly as possible. When I raised this query with Nick, he told me that “the adaptive throttling is a sort of automatic leveling default that CAN be modified if the user requires. These can also be manipulated with the QoS settings when creating a Data Protection profile. Cohesity expose a ‘Low Med High’ to the user, but on the back-end, this dictates the FIFO QoS priority. Most backup profiles are going to be ‘Low’ in nature, and this is going to allow many backup jobs to run in parallel across different environments, as well as allow us to continue to serve data for other things like file shares and test/dev workflows.”
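Nick's description of "Low Med High" mapping to a FIFO QoS priority on the back end is a classic scheduling pattern: a priority queue that falls back to first-in-first-out within each level. Here is a small sketch of that behavior; the class and job names are mine, not Cohesity's implementation:

```python
import heapq
import itertools

# Illustrative mapping of the user-facing levels to back-end priority.
PRIORITY = {"High": 0, "Med": 1, "Low": 2}

class QosQueue:
    """Priority queue with FIFO ordering within each QoS level."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tiebreak: earlier submission wins

    def submit(self, job: str, level: str) -> None:
        heapq.heappush(self._heap, (PRIORITY[level], next(self._seq), job))

    def next_job(self) -> str:
        return heapq.heappop(self._heap)[2]

q = QosQueue()
q.submit("nightly-backup-1", "Low")
q.submit("file-share-io", "Med")
q.submit("critical-vm-backup", "High")
q.submit("nightly-backup-2", "Low")

order = [q.next_job() for _ in range(4)]
print(order)
# ['critical-vm-backup', 'file-share-io', 'nightly-backup-1', 'nightly-backup-2']
```

This matches the behavior described: most backup jobs sit at "Low" and happily run in parallel, while higher-priority work (file shares, test/dev, a critical VM backup) jumps the queue without starving anything permanently.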
There are no updates to the hardware itself in this release, although I am led to believe that Cohesity are looking at some of the new storage technologies that are appearing now, and seeing if these would be a good fit going forward (e.g. reducing the data center footprint of secondary storage even further). More on this over time I guess.
The Cohesity Data Platform 2.0 is available now at a starting price of $90,000 (USD). I asked Nick to explain this a little more. It seems that the $90k "starting at" price is for a 3-node C2300 (48TB raw, using 4TB drives, 800GB PCIe flash cards per node). The more "flagship" Cohesity block is a 4-node C2500 (96TB raw, using 8TB drives, 1.6TB PCIe flash cards per node); the C2500 list price is $199k. That said, both of those include all necessary licenses at no additional cost. Nick said that this is also subject to change in the future, as Cohesity are exploring "packages" to build around the OASIS platform for different customers who want to do different things (OASIS being the secret sauce at the heart of the Cohesity Data Platform).
For more information, visit www.cohesity.com. Or reach out to Nick Howell on Twitter; he would love to discuss the solution and use cases with you in more detail. Nick's Twitter handle is @datacenterdude.
I wish Cohesity every success in 2016.