The value of Virtual Volumes (VVols)

Regular readers will know that I normally blog about the technical aspects of storage, as opposed to writing opinion pieces. However, there have been a number of articles published recently questioning the value of VMware’s Virtual Volumes, commonly referred to as VVols. In general, the pieces I have read ask whether VVols (or, to be more accurate, the per-VM granularity feature of VVols) adds value when NFS already provides per-VM granularity in the form of files. The point these pieces miss is that VVols is so much more than per-VM granularity. I’ve just come back from some great VMUG events in Frankfurt, Germany and Warsaw, Poland, where I presented on the value of VVols to our users. I therefore thought it opportune to post about the other benefits of Virtual Volumes.
Continue reading

VSAN 6.0 Part 9 – Proactive Re-balance

This is another nice new feature of Virtual SAN 6.0. It is basically a directive to VSAN to start re-balancing components belonging to virtual machine objects across all the hosts and all the disks in the cluster. Why might you want to do this? Well, it’s very simple. As VMs are deployed on the VSAN datastore, there are algorithms in place to distribute those components across the cluster in a balanced fashion. But what if a host was placed into maintenance mode, with the data on the host evacuated beforehand, and is now being brought back into the cluster after maintenance? What about adding new disks or disk groups to an existing node in the cluster (scaling up)? What if you are introducing a new node to the cluster (scaling out)? The idea behind proactive re-balance is to allow VSAN to start consuming these newly introduced resources sooner rather than later.
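For anyone who wants to kick this off from the command line, here is a minimal sketch using the Ruby vSphere Console (RVC); the cluster path below is just an example, and the available thresholds and options may vary by build:

# Check the current disk balance state of the cluster (path is an example)
> vsan.proactive_rebalance_info /localhost/DC/computers/VSAN-Cluster

# Start a proactive re-balance run with the default thresholds
> vsan.proactive_rebalance --start /localhost/DC/computers/VSAN-Cluster

# Stop an in-progress re-balance if needed
> vsan.proactive_rebalance --stop /localhost/DC/computers/VSAN-Cluster

The re-balance then runs in the background, moving components onto the newly added resources over time.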

Continue reading

VSAN 6.0 Part 6 – Maintenance Mode Changes

There is a subtle difference in maintenance mode behaviour between VSAN version 5.5 and VSAN version 6.0. In Virtual SAN version 5.5, when a host is placed into maintenance mode with the “Ensure Accessibility” option, the host in maintenance mode continues to contribute its storage to the VSAN datastore. In other words, any VMs that had components stored on this host remained fully compliant, with all of their components available. In VSAN 6.0, this behaviour changed. Now, when a host is placed into maintenance mode, it no longer contributes storage to the VSAN datastore, and any components that reside on the physical storage of that host are marked as absent. The following screenshots show the behaviour.
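Before we get to those, one aside: the same choice is exposed when entering maintenance mode from the ESXi shell, where the VSAN data handling mode is specified explicitly. A minimal sketch, assuming the 6.0 mode names I am familiar with:

# Enter maintenance mode, keeping VM objects accessible (no full evacuation)
esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility

# Alternatively, fully evacuate all VSAN data from the host first
esxcli system maintenanceMode set --enable true --vsanmode evacuateAllData

# Exit maintenance mode when done
esxcli system maintenanceMode set --enable false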

Continue reading

VSAN 6.0 Part 5 – New vsanSparse Snapshots

There is a new snapshot format introduced in VSAN 6.0 called vsanSparse. It replaces the traditional vmfsSparse format (redo logs). The vmfsSparse format was used when snapshots of VMs were taken in VSAN 5.5, and it is also the format used when a snapshot is taken of a VM residing on traditional VMFS or NFS. The older vmfsSparse format left a lot to be desired when it came to performance and scalability. This KB article from our support team, indicating that no snapshot should be used for more than 72 hours and that snapshot chains should contain no more than 2-3 snapshots, speaks for itself.
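If you want to verify which format a given snapshot is using, one rough way is to take a snapshot from the ESXi shell and inspect the delta disk descriptor. This is a sketch only: the VM ID and datastore paths below are examples, and it assumes (as has been my experience) that the descriptor records the create type:

# Find the VM’s ID on this host
vim-cmd vmsvc/getallvms

# Take a snapshot of VM ID 1 (name, description, include memory = 0, quiesce = 0)
vim-cmd vmsvc/snapshot.create 1 "format-test" "checking snapshot format" 0 0

# Inspect the delta disk descriptor (path is an example)
grep createType /vmfs/volumes/vsanDatastore/myvm/myvm-000001.vmdk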

This new vsanSparse snapshot format leverages features of the new (v2) on-disk format in VSAN 6.0, VirstoFS. VirstoFS is the first implementation of technology that was acquired when VMware bought a company called Virsto a number of years ago. You can get an overview of this company from this blog post I did prior to the acquisition.

Continue reading

VSAN 6.0 Part 4 – All-Flash VSAN Capacity Tier Considerations

In Virtual SAN version 6.0, VMware introduced support for an all-flash VSAN. In other words, both the caching layer and the capacity layer can be made up of flash-based devices such as SSDs. However, the mechanism for marking some flash devices as designated for the capacity layer, while leaving other flash devices designated for the caching layer, is not at all intuitive at first glance. For that reason, I’ve included some steps here on how to do it.
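As a preview, here is a minimal sketch of one way to do this from the ESXi command line; the device identifier below is an example, and in some builds the equivalent operation is done via RVC instead:

# Query how ESXi currently classifies each device (look for the IsCapacityFlash field)
vdq -q

# Tag a flash device so VSAN treats it as a capacity-tier device
esxcli vsan storage tag add -d naa.5000c5002cf0a6e3 -t capacityFlash

# Remove the tag to make the device eligible for the caching tier again
esxcli vsan storage tag remove -d naa.5000c5002cf0a6e3 -t capacityFlash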

Continue reading

A quick introduction to Rubrik

I first encountered Rubrik at this year’s Partner Exchange (PEX) 2015 in San Francisco. They had some promotional flyers made up, labeled “Backup Still Sucks”. I guess a lot of people can relate to that. I had a chat with Julia Lee, who used to be a storage product marketing manager here at VMware but recently moved to Rubrik. Rubrik’s pitch is that customers are currently stitching together backup software with backup storage in order to back up their virtual infrastructures – there is no seamless integration. Rubrik’s primary aim is backup simplicity – they want to provide a “time machine”-like approach for virtual machine workloads.

Continue reading

VSAN 6.0 Part 3 – New Default Datastore Policy

One of the most common questions I got about VSAN 5.5 was: “why is VSAN deploying thick disks, when all of the documentation states that VSAN deploys thin disks?”

The answer was quite straightforward: the VMs were being deployed without a VM Storage Policy. This meant that they went through the standard VM deployment wizard, which offers administrators a choice of thin, lazy-zeroed thick (LZT) and eager-zeroed thick (EZT) formats. The default option is LZT, so if you just click-click-click through the wizard (just like I do) when deploying a VM, you end up with an LZT format VM, even on the VSAN datastore. I described this issue in this older blog post. It’s only when you select an actual VM Storage Policy when deploying a VM that VSAN uses the Object Space Reservation capability, which by default is 0%, meaning that the VM is effectively thinly provisioned. We realized that this was causing some issues for customers, so we improved this whole deployment mechanism in 6.0 with the introduction of default datastore policies.
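Related to this, each ESXi host also carries a set of per-object-class defaults that apply when no SPBM policy is chosen. A hedged sketch of examining and changing them from the command line, with the policy attribute names as I understand them:

# Show the default policy applied to each object class (vdisk, vmnamespace, vmswap, ...)
esxcli vsan policy getdefault

# Example: set the default vdisk policy to tolerate one host failure,
# with a 0% object space reservation (i.e. effectively thin)
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"proportionalCapacity\" i0))"

The new default datastore policy in 6.0 gives you a vCenter-level way to manage this, rather than relying on per-host defaults.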

Continue reading