This question has come up on a number of occasions in the past, usually when there is a question about scalability and the number of disk drives that can be supported on a single host participating in Virtual SAN. The configuration maximums for a Virtual SAN node are 5 disk groups, with each disk group containing 1 flash device and up to 7 capacity devices (these capacity devices are magnetic disks in hybrid configurations or flash devices in all-flash configurations). Now the inevitable next question is how this configuration is implemented on a physical server. How can I get to 35/40 devices in a single server? There are a few ways to do it.
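The 35/40 figures fall straight out of those maximums; a quick back-of-the-envelope check in Python:

```python
# Virtual SAN 6.x per-host configuration maximums, as described above.
DISK_GROUPS_PER_HOST = 5   # maximum disk groups per VSAN node
FLASH_PER_GROUP = 1        # one flash (cache) device per disk group
CAPACITY_PER_GROUP = 7     # up to 7 capacity devices per disk group

# Capacity devices only: magnetic disks (hybrid) or flash (all-flash).
capacity_devices = DISK_GROUPS_PER_HOST * CAPACITY_PER_GROUP

# Total devices, including the flash device in each disk group.
total_devices = DISK_GROUPS_PER_HOST * (FLASH_PER_GROUP + CAPACITY_PER_GROUP)

print(capacity_devices)  # 35
print(total_devices)     # 40
```

So "35/40" is 35 capacity devices plus the 5 flash devices, for 40 devices in total per host.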
A couple of months back, I wrote a short article on Rubrik. They were just coming out of stealth mode and had started an early access program. Since they had not officially launched, there wasn’t a lot that I was allowed to say about the company, other than give a high-level overview. As they have now officially launched their r300 series of products, along with news of a massive $41 million Series B funding round, I can now share some additional details about their products and technology. Just to recap on what Rubrik do, they are offering a converged, scale-out backup software and backup storage appliance. The Rubrik appliance (Brik) is a “rack and go” architecture, with the ability to scale from three nodes to thousands (unlimited) using industry standard 2U commodity appliance hardware.
The whole pitch is the idea that “backups suck”, and they want to give administrators a much better backup and restore experience, similar to Apple’s ‘Time Machine’ feature.
There have been a number of queries around Virtual Volumes (VVols) and replication, especially since the release of KB article 2112039 which details all the interoperability aspects of VVols.
In Q1 of the KB, the question is asked “Which VMware Products are interoperable with Virtual Volumes (VVols)?” The response includes “VMware vSphere Replication 6.0.x”.
In Q2 of the KB, the question is asked “Which VMware Products are currently NOT interoperable with Virtual Volumes (VVols)?” The response includes “VMware Site Recovery Manager (SRM) 5.x to 6.0.x”.
In Q4 of the KB, the question is asked “Which VMware vSphere 6.0.x features are currently NOT interoperable with Virtual Volumes (VVols)?” The response includes “Array-based replication”.
So where does that leave us from a replication/DR standpoint with VVols?
Before I begin, this isn’t really a feature of VSAN per se. In vSphere 6.0, you can also blink LEDs on disk drives without VSAN deployed. However, because of the scale-up and scale-out capabilities in VSAN 6.0, where you can have a great many disk drives and a great many ESXi hosts, being able to identify a drive for replacement becomes very important.
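For those curious about what sits behind the button in the web client, the vSphere 6.0 API exposes this on HostStorageSystem as TurnDiskLocatorLedOn_Task and TurnDiskLocatorLedOff_Task, each taking a list of SCSI disk UUIDs (pyVmomi drops the _Task suffix). Here is a minimal sketch of the flow; note the stub object and the naa. identifier are made up for illustration, standing in for a live connection to a real host’s storage system:

```python
# Sketch of driving disk-locator LEDs via the vSphere API. A stub stands in
# for a real vim.host.StorageSystem object (e.g. obtained through pyVmomi as
# host.configManager.storageSystem), so no live vCenter is needed here.

class StubStorageSystem:
    """Records which disk UUIDs were asked to blink, instead of
    talking to real hardware."""
    def __init__(self):
        self.lit = set()

    def TurnDiskLocatorLedOn(self, scsiDiskUuids):
        self.lit.update(scsiDiskUuids)

    def TurnDiskLocatorLedOff(self, scsiDiskUuids):
        self.lit.difference_update(scsiDiskUuids)


def blink_for_replacement(storage_system, disk_uuid):
    # Light the locator LED on the one drive we want pulled.
    storage_system.TurnDiskLocatorLedOn(scsiDiskUuids=[disk_uuid])


ss = StubStorageSystem()
blink_for_replacement(ss, "naa.500a07510f86d6bb")  # hypothetical device ID
print(sorted(ss.lit))  # ['naa.500a07510f86d6bb']
```

Against a real host you would pass the actual device UUIDs from the host’s storage topology, and turn the LED back off with the matching Off call once the drive has been swapped.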
So this is obviously a useful feature. And of course I wanted to test it out and see how it works. In my 4-node cluster, I started to test this feature on some disks in each of the hosts. On 2 of the 4 hosts, this worked fine. On the other 2, it did not. Eh? These were all identically configured hosts, all running 6.0 GA with the same controller and identical disks. The ensuing investigation turned up the following.
I first encountered Rubrik at this year’s Partner Exchange (PEX) 2015 in San Francisco. They had some promotional flyers made up labeled “Backup Still Sucks”. I guess a lot of people can relate to that. I had a chat with Julia Lee, who used to be a storage product marketing manager here at VMware, but recently moved to Rubrik. Rubrik’s pitch is that customers are currently stitching together backup software with backup storage in order to back up their virtual infrastructures – there is no seamless integration. Rubrik’s primary aim is backup simplicity – they want to provide a “time machine” like approach for virtual machine workloads.
There has been a bit of confusion recently over the use of OEM ESXi ISO images and Virtual SAN. These OEM ESXi ISO images allow our partners to pre-package a bunch of their own drivers and software components so that you have them available immediately on install. While this can be very beneficial for non-VSAN environments, it is not quite so straightforward for VSAN deployments. Drivers associated with VSAN have to go through extra testing for some very good reasons that I will allude to shortly. The issue really pertains to the drivers shipped with many of these ESXi images; in many cases these are the latest and greatest drivers from the OEM for a given storage controller, and they may not yet be qualified for VSAN (qualified == tested).
My first introduction to X-IO was via Stephen Foskett’s Tech Field Days. They piqued my interest and I added them to the list of storage vendors that I wanted to check out at VMworld 2014. I started to research these guys a little more, and learnt that they are closely related to Xiotech, a SAN company that I dealt with on occasion when I worked in technical support for VMware back in the day. It seems that Xiotech acquired Seagate’s spun-out Advanced Storage Group in 2007. That group then began work on a different product from the existing Xiotech line, namely the Intelligent Storage Element or ISE array. The Xiotech products were discontinued in 2012 (although the name continues to appear on the VMware SAN/Storage HCL), and the focus was placed on the ISE products. I was a bit confused when I saw that X-IO were not listed on the HCL directly, but after checking with Blair Parkhill, VP of Tech Marketing at X-IO, it seems that they still use their incorporated name, Xiotech.