A closer look at Rubrik

A couple of months back, I wrote a short article on Rubrik. They were just coming out of stealth mode and had started an early access program. Since they had not officially launched, there wasn’t a lot that I was allowed to say about the company, other than give a high-level overview. As they have now officially launched their r300 series of products, along with news of a massive $41 million Series B funding round, I can now share some additional details about their products and technology. To recap on what Rubrik do: they offer a converged, scale-out backup software and backup storage appliance. The Rubrik appliance (Brik) is a “rack and go” architecture, with the ability to scale from three to thousands of nodes (effectively unlimited) using industry-standard 2U commodity appliance hardware.

The whole pitch is built around the idea that “backups suck”, and they want to give administrators a much better backup and restore experience, similar to Apple’s ‘Time Machine’ feature.

Continue reading

VAAI now available with vSphere Standard Edition

A short post today, but it highlights what I feel is an important enhancement to vSphere licensing. I’ve had lots of questions recently about why VAAI (vSphere Storage APIs for Array Integration) is not available in the standard edition of vSphere. This came up especially often once I began posting about Virtual Volumes earlier this year, since it was clear that Virtual Volumes would be available in the standard edition. One reason why this was confusing is that if a migration of a VVol could not be handled by the array using the VASA APIs, the migration would fall back to using VAAI offload primitives. But if you only had standard licensing for VVols, would you still be supported?
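As an aside, if you want to check which of the VAAI offload primitives a particular device supports, the following should do it (the naa identifier is just a placeholder; pick a real one from esxcli storage core device list):

# esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx

The output lists the support status of the ATS, Clone (XCOPY), Zero (WRITE_SAME) and Delete (UNMAP) primitives for that device.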

Continue reading

vSphere 6.0 HA and Component Protection with vMSC

I had a query recently about changes in vSphere 6.0, especially when it comes to vSphere HA and VM Component Protection (VMCP) with vMSC (vSphere Metro Storage Cluster). The question is very straightforward – do all the same advanced setting recommendations for PDL and APD apply to vMSC on vSphere 6.0 as they did for vSphere 5.5? Or do we have some new recommendations now around PDL and APD for vMSC with the introduction of VMCP in vSphere 6.0?
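For context, the vSphere 5.5 era recommendations I am referring to revolved around two settings. This is a hedged recap from memory – the exact names and locations of these settings have varied between releases, so always verify against the current vMSC best practices paper:

disk.terminateVMOnPDLDefault = True    (host-side setting, kills a VM when its storage enters a PDL state)
das.maskCleanShutdownEnabled = True    (vSphere HA advanced option, allows HA to restart such a VM on a host that still has access to the storage)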

Continue reading

VSAN 6.0 Part 10 – 10% cache recommendation for AF-VSAN

With the release of VSAN 6.0, and the new all-flash configuration (AF-VSAN), I have received a number of queries around our 10% cache recommendation. The main query is, since AF-VSAN no longer requires a read cache, can we get away with a smaller write cache/buffer size?

Before getting into the cache sizing, it is probably worth beginning this post with an explanation of the caching algorithm changes between version 5.5 and 6.0. In VSAN 5.5, which came as a hybrid configuration only (a mixture of flash and spinning disk), the flash device behaved as both a write buffer (30%) and a read cache (70%). If a read request was not satisfied by the cache – in other words, there was a read cache miss – then the data block was retrieved from the capacity layer. This was an expensive operation, especially in terms of latency, so the guideline was to keep your working set in cache as much as possible. Since the majority of virtualized applications have a working set somewhere in the region of 10%, this is where the cache size recommendation of 10% came from; a worked example follows below. With hybrid, there is regular destaging of data blocks from write cache to spinning disk. This is a proximal algorithm, which looks to destage data blocks that are contiguous (adjacent to one another), speeding up the destaging operations.
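To make the 10% guideline concrete, here is a simple worked example. The numbers are purely illustrative, and my understanding is that the rule of thumb is based on anticipated consumed capacity rather than provisioned capacity:

100 VMs x 60 GB consumed each       = 6 TB anticipated consumed capacity
Cache tier at 10%                   = 600 GB of flash across the cluster
Hybrid 5.5 split: 70% read cache    = 420 GB, 30% write buffer = 180 GB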

Continue reading

vSphere 6.0 Storage Features Part 8: VAAI UNMAP changes

A few weeks back, my good pal Cody Hosterman over at Pure Storage was experimenting with VAAI (the vSphere APIs for Array Integration) and discovered that he could successfully UNMAP (reclaim) blocks directly from a Guest OS in vSphere 6.0. Cody wrote about his findings here. Effectively, if you have deleted files within a Guest OS, and your VM is thinly provisioned, you can tell the array through this VAAI primitive that you are no longer using these blocks, which allows the array to reclaim them for other uses. I know a lot of you have been waiting for this functionality for some time. However, Cody had a bunch of questions and reached out to me to see if I could provide some answers. After conversing with a number of engineers and product managers here at VMware, here are some of the answers to the questions that Cody asked.
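For anyone who wants to try this themselves, my understanding of the prerequisites is as follows (treat this as a sketch rather than gospel – Cody’s post covers the details and caveats): the VMDK needs to be thinly provisioned, and the host advanced option /VMFS3/EnableBlockDelete needs to be set to 1, as it is disabled by default:

# esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete

From a Windows guest, deleting files and then running the Optimize Drives utility (or Optimize-Volume with the -ReTrim switch in PowerShell) should then push the UNMAPs down through the thin VMDK to the array.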

Continue reading

Announcing the Virtual SAN 6.0 Health Check Plugin

Today VMware announces the Virtual SAN 6.0 Health Check Plugin, a feature that will check your Virtual SAN configuration, both proactively and reactively, and highlight any abnormal conditions found in the cluster. This is available to all our VSAN customers right now. Not only does it check the health of the cluster, but it also checks the state of the network, host connectivity, physical disk status, and underlying virtual machine object state. This is a great tool for ensuring that an initial deployment or proof-of-concept of VSAN has been rolled out successfully, giving you confidence in your VSAN deployment. It is also useful for ongoing monitoring and maintenance of your Virtual SAN cluster.

Continue reading

vSphere 6.0 Storage Features Part 7: VAAI XCOPY improvements

The more astute among you who have already moved to vSphere 6.0, and who like looking at CLI outputs, may have observed some new columns/fields in the PSA claimrules when you run the following command:

# esxcli storage core claimrule list --claimrule-class=VAAI

The new fields are as follows:

XCOPY Use Array     XCOPY Use              XCOPY Max
Reported Values     Multiple Segments      Transfer Size 
---------------     -----------------      -------------- 
false                   false                  0 
false                   false                  0 
false                   false                  0 
false                   false                  0 
false                   false                  0 
false                   false                  0 
false                   false                  0 
false                   false                  0 
false                   false                  0 
false                   false                  0 
false                   false                  0 
false                   false                  0 
false                   false                  0 
false                   false                  0
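These columns correspond to new XCOPY options on the claim rule add command in 6.0. I have not verified every option name here, so treat the flags as an assumption and confirm them with esxcli storage core claimrule add --help on your own host; the rule number, vendor, model and plugin below are placeholders only:

# esxcli storage core claimrule add -r 914 -t vendor -V MyVendor -M MyModel -P VMW_VAAIP_T10 -c VAAI --xcopy-use-array-values --xcopy-use-multi-segs --xcopy-max-transfer-size 240

The three options map directly onto the three columns above: honour the transfer size reported by the array, allow multiple segments per XCOPY command, and cap the maximum transfer size (in MB).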

Continue reading