Virtual Volumes (VVols) and Replication/DR

There have been a number of queries around Virtual Volumes (VVols) and replication, especially since the release of KB article 2112039, which details all the interoperability aspects of VVols. Q1 of the KB asks “Which VMware Products are interoperable with Virtual Volumes (VVols)?” The response includes “VMware vSphere Replication 6.0.x”. Q2 asks “Which VMware Products are currently NOT interoperable with Virtual Volumes (VVols)?” The response includes “VMware Site Recovery Manager (SRM) 5.x to 6.0.x”. Q4 asks “Which VMware vSphere 6.0.x features are…

VSAN 6.0 Part 7 – Blinking those blinking disk LEDs

Before I begin, this isn’t really a VSAN feature as such; in vSphere 6.0 you can blink LEDs on disk drives without VSAN deployed at all. However, because of the scale-up and scale-out capabilities in VSAN 6.0, where a cluster can contain a great many disk drives and ESXi hosts, being able to identify a drive for replacement becomes very important. So this is obviously a useful feature, and of course I wanted to test it out and see how it works. In my 4 node cluster, I started to test this feature on some disks in…
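
For reference, identifying the drive from the ESXi shell is straightforward, and there is a locator LED option on the device as well. The LED flags below are as I remember them, so treat them as an assumption and check esxcli storage core device set --help on your build; the naa identifier is a placeholder for your own device.

    # List the disks claimed by VSAN and note the naa identifier of the drive in question
    esxcli vsan storage list

    # Turn on the locator LED for that device for 60 seconds (flag names assumed; verify with --help)
    esxcli storage core device set --led-state=locator --led-duration=60 -d naa.xxxxxxxx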

VSAN 6.0 Part 6 – Maintenance Mode Changes

There is a subtle difference in maintenance mode behaviour between VSAN version 5.5 and VSAN version 6.0. In Virtual SAN version 5.5, when a host is placed into maintenance mode with the “Ensure Accessibility” option, the host in maintenance mode continues to contribute its storage towards the VSAN datastore. In other words, any VMs that had components stored on this host remained fully compliant, with all of their components available. In VSAN 6.0, this behaviour changed. Now, when a host is placed into maintenance mode, it no longer contributes storage to the VSAN datastore, and any components that reside…
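
As a quick sketch, the same choice is available when entering maintenance mode from the ESXi shell; the vsanmode values below are as I recall them from the CLI help, so double-check on your own host.

    # Enter maintenance mode while keeping objects accessible (components on this host
    # become absent rather than being evacuated elsewhere)
    esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility

    # Exit maintenance mode again
    esxcli system maintenanceMode set --enable false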

VSAN 6.0 Part 5 – new vsanSparse snapshots

There is a new snapshot format introduced in VSAN 6.0 called vsanSparse. It replaces the traditional vmfsSparse format (redo logs). The vmfsSparse format was used when snapshots of VMs were taken in VSAN 5.5, and is also the format used when a snapshot is taken of a VM residing on traditional VMFS and NFS. The older vmfsSparse format left a lot to be desired when it came to performance and scalability. This KB article from our support team, indicating that no snapshot should be used for more than 72 hours, and that snapshot chains should contain no more than 2-3 snapshots,…
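
If you want to see the new format for yourself, a snapshot taken from the ESXi shell behaves the same as one taken in the UI; on a VSAN 6.0 datastore the resulting delta should be vsanSparse rather than a vmfsSparse redo log. The VM ID and snapshot name below are just placeholders.

    # Find the VM's ID
    vim-cmd vmsvc/getallvms

    # Take a snapshot (no memory, no quiescing); on VSAN 6.0 the delta object
    # created for this snapshot uses the vsanSparse format
    vim-cmd vmsvc/snapshot.create <vmid> test-snap "vsanSparse test" 0 0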

VSAN 6.0 Part 4 – All-Flash VSAN Capacity Tier Considerations

In Virtual SAN version 6.0, VMware introduced support for an all-flash VSAN. In other words, both the caching layer and the capacity layer can be made up of flash-based devices such as SSDs. However, the mechanism for designating some flash devices for the capacity layer, while leaving others for the caching layer, is not at all intuitive at first glance. For that reason, I’ve included some steps here on how to do it.
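
In short, flash devices destined for the capacity tier need to be tagged as capacity devices. A minimal sketch from the ESXi shell, with the naa identifier as a placeholder for your own device:

    # Tag a flash device for the capacity tier
    esxcli vsan storage tag add -d naa.xxxxxxxx -t capacityFlash

    # Verify the tag took effect (look for the IsCapacityFlash field in the output)
    vdq -q -d naa.xxxxxxxx

    # Remove the tag again if you change your mind
    esxcli vsan storage tag remove -d naa.xxxxxxxx -t capacityFlash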

More Virtual Volumes (VVols) and Snapshots goodness

Well, I got so many questions about my previous articles on the new way of doing snapshots with VVols that I decided to take the time to dig even deeper into their behaviour. In this setup, I take a Windows 2008 Guest OS running in a virtual machine deployed on an NFS datastore, and I compare it to an identical VM deployed on a VVol datastore, purely from the point of view of how snapshots are handled. Remember that with VVols, snapshots always run on the base disk, compared to the traditional way of doing snapshots where the VM always runs on the…

VSAN 6.0 Part 3 – New Default Datastore Policy

One of the most common questions I got about VSAN 5.5 was “why is VSAN deploying thick disks, when all of the documentation states that VSAN deploys thin disks?” The answer was quite straightforward: the VMs were being deployed without a VM Storage Policy. This meant that they went through the standard VM deployment wizard, which offers administrators the choice of thin, lazy-zeroed thick (LZT) and eager-zeroed thick (EZT) disks. The default option is LZT, so if you just do click-click-click (just like I do) when deploying a VM, then you…
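
If you are curious about what gets applied when no VM Storage Policy is chosen, the per-host defaults can be inspected from the ESXi shell; this is just a hint at where the behaviour comes from, and the new default datastore policy in 6.0 is the subject of the post itself.

    # Show the default VSAN policy applied per object class (vdisk, vmnamespace, etc.)
    # when a VM is deployed without a VM Storage Policy
    esxcli vsan policy getdefault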