VSAN 6.0 Part 6 – Maintenance Mode Changes

There is a subtle difference in maintenance mode behaviour between VSAN version 5.5 and VSAN version 6.0. In Virtual SAN version 5.5, when a host is placed into maintenance mode with the “Ensure Accessibility” option, the host in maintenance mode continues to contribute its storage towards the VSAN datastore. In other words, any VMs that had components stored on this host remained fully compliant, with all of their components available. In VSAN 6.0, this behaviour changed. Now, when a host is placed into maintenance mode, it no longer contributes storage to the VSAN datastore, and any components that reside on the physical storage of that host are marked as absent. The following screenshots show the behaviour.
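As an aside, maintenance mode can also be driven from the ESXi command line rather than the vSphere Web Client. This is only a minimal sketch; verify the exact flags with esxcli system maintenanceMode set --help on your build before relying on it:

  # Enter maintenance mode, asking VSAN to keep object copies accessible
  esxcli system maintenanceMode set -e true -m ensureObjectAccessibility

  # Exit maintenance mode when the work on the host is done
  esxcli system maintenanceMode set -e false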

Continue reading

VSAN 6.0 Part 5 – new vsanSparse snapshots

There is a new snapshot format introduced in VSAN 6.0 called vsanSparse. It replaces the traditional vmfsSparse format (redo logs). The vmfsSparse format was used when snapshots of VMs were taken in VSAN 5.5, and it is also the format used when a snapshot is taken of a VM residing on traditional VMFS and NFS. The older vmfsSparse format left a lot to be desired when it came to performance and scalability. This KB article from our support team, which recommends that no snapshot be used for more than 72 hours and that snapshot chains contain no more than 2-3 snapshots, speaks for itself.

This new vsanSparse snapshot format leverages features of the new (v2) on-disk format in VSAN 6.0, VirstoFS. VirstoFS is the first implementation of technology that was acquired when VMware bought a company called Virsto a number of years ago. You can get an overview of this company from this blog post I did prior to the acquisition.

Continue reading

VSAN 6.0 Part 4 – All-Flash VSAN Capacity Tier Considerations

In Virtual SAN version 6.0, VMware introduced support for an all-flash VSAN. In other words, both the caching layer and the capacity layer could be made up of flash-based devices such as SSDs. However, the mechanism for designating some flash devices for the capacity layer, while leaving others for the caching layer, is not at all intuitive at first glance. For that reason, I’ve included some steps here on how to do it.
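To give a flavour of what is involved, here is a minimal sketch of the tagging step from the ESXi command line. The device identifier is a placeholder, so substitute your own, and verify the syntax on your build:

  # List the disks VSAN can see and how they are currently classified
  vdq -q

  # Tag a flash device (placeholder NAA identifier) for the capacity tier
  esxcli vsan storage tag add -d naa.XXXXXXXXXXXXXXXX -t capacityFlash

  # Verify the IsCapacityFlash attribute on that device
  vdq -q -d naa.XXXXXXXXXXXXXXXX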

Continue reading

A quick introduction to Rubrik

I first encountered Rubrik at this year’s Partner Exchange (PEX) 2015 in San Francisco. They had some promotional flyers made up, labeled “Backup Still Sucks”. I guess a lot of people can relate to that. I had a chat with Julia Lee, who used to be a storage product marketing manager here at VMware, but recently moved to Rubrik. Rubrik’s pitch is that customers are currently stitching together backup software with backup storage in order to back up their virtual infrastructures – there is no seamless integration. Rubrik’s primary aim is backup simplicity – they want to provide a “time machine” like approach for virtual machine workloads.

Continue reading

VSAN 6.0 Part 3 – New Default Datastore Policy

One of the most common questions I got about VSAN 5.5 was “why is VSAN deploying thick disks, when all of the documentation states that VSAN deploys thin disks?”

The answer was quite straightforward, and was due to the fact that the VMs were being deployed without a VM Storage Policy. This meant that they went through the standard VM deployment wizard, which offered administrators the option of thin, lazy-zeroed thick (LZT) or eager-zeroed thick (EZT) formats. The default option is LZT, so if you just do click-click-click (just like I do) when deploying a VM, you end up deploying an LZT format VM, even on the VSAN datastore. I described this issue in this older blog post. It’s only when you select an actual VM Storage Policy when deploying a VM that VSAN uses the Object Space Reservation capability, which by default is 0%, meaning that the VM is effectively thinly provisioned. We realized that this was causing some issues for customers, so we improved this whole deployment mechanism in 6.0 with the introduction of Datastore Default policies.
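For reference, the host-side VSAN defaults that apply in this situation, including the 0% Object Space Reservation just mentioned, can be inspected from the ESXi command line. A minimal sketch; check the output against your own hosts:

  # Show the default VSAN policy the host applies per object class
  # (cluster, vdisk, vmnamespace, vmswap)
  esxcli vsan policy getdefault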

Continue reading

VSAN 6.0 Part 2 – v2 On-disk Format Upgrade Considerations

I was heavily involved in the documentation effort for VSAN 6.0, but I know that not everyone likes to RTFM, so to speak. What I thought I would do in this post is give an overview of the upgrade process, and highlight some considerations. But I really would urge you to read through the VSAN 6.0 Administrator’s Guide, and perhaps the VSAN Troubleshooting Reference Manual, especially the sections dealing with upgrades, if you do plan to upgrade from VSAN 5.5 to 6.0. There is a lot of useful information there.

There are four steps to the upgrade process:

  1. Upgrading vCenter Server to 6.0
  2. Upgrading the ESXi hosts to 6.0
  3. Upgrading the on-disk filesystem format from v1 to v2 (VMFS-L to VirstoFS)
  4. Upgrading the components to v2

Items 1 & 2 are outside the scope of this post. Refer to the generic vSphere 6 documentation on how to do those. Items 3 & 4 are done via a new RVC command that we will discuss in more detail here.
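As a taster, the RVC workflow for steps 3 & 4 looks roughly like the sketch below. The command name and inventory path are from my own recollection of the 6.0 RVC build, so treat them as assumptions and confirm with tab-completion in your own RVC session:

  # Upgrade the on-disk format of all disk groups in the cluster
  vsan.v2_ondisk_upgrade /localhost/Datacenter/computers/VSAN-Cluster

  # Afterwards, check the on-disk format version reported for each disk
  vsan.disks_stats /localhost/Datacenter/computers/VSAN-Cluster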

Continue reading

VSAN 6.0 Part 1 – New quorum mechanism

vSphere 6.0 was released yesterday, and it includes the new version of Virtual SAN – 6.0. I now wish to start sharing some of the new features and functionality with you. One of the things we always enforced in version 5.5 was the fact that when you deployed a VM with NumberOfFailuresToTolerate = 1, you always had at least 3 components: a 1st copy of the data, a 2nd copy of the data, and then a witness component for quorum. In version 5.5, for a VM to remain accessible, “one full copy of the data and more than 50% of components must be available”. We have introduced some subtle differences around quorum and VM accessibility in version 6.0.
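If you want to see this component layout for yourself, RVC can display the data copies and witnesses backing a VM’s objects. A minimal sketch with a placeholder inventory path:

  # Show the components (data copies and witnesses) behind each of the VM’s objects
  vsan.vm_object_info /localhost/Datacenter/vms/my-vm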

Continue reading