VSAN 6.2 Part 11 – Support for higher FTT with larger VMDKs

In the VSAN 6.0 Design & Sizing Guide, a caveat was placed on the combination of VMDK size and the Number of Failures to Tolerate (FTT) setting. It reads like this:

“If the VMDK size is greater than 16TB, then the maximum value for NumberOfFailuresToTolerate is 1.”

I’m pleased to say that this restriction has been lifted in VSAN 6.2.
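
To put the lifted restriction in context, here is a minimal Python sketch of how a RAID-1 (mirroring) object layout scales with FTT. The 255GB figure is VSAN's documented default maximum component size; the helper itself, and the decision to ignore witness components, are simplifications for illustration.

```python
import math

MAX_COMPONENT_GB = 255  # VSAN's default maximum component size

def mirror_layout(vmdk_gb: float, ftt: int) -> dict:
    """Rough sizing for a RAID-1 (mirroring) VSAN object.

    FTT=n keeps n+1 full replicas of the object, and any replica
    larger than the maximum component size is split into multiple
    components. Witness components are ignored here for simplicity.
    """
    replicas = ftt + 1
    components_per_replica = math.ceil(vmdk_gb / MAX_COMPONENT_GB)
    return {
        "replicas": replicas,
        "components_per_replica": components_per_replica,
        "total_components": replicas * components_per_replica,
        "raw_capacity_gb": vmdk_gb * replicas,
        "minimum_hosts": 2 * ftt + 1,
    }

# A 20TB VMDK with FTT=2, now permitted in VSAN 6.2:
print(mirror_layout(20 * 1024, ftt=2))
```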

Continue reading

Compare and Contrast – VSAN and VVols

Earlier this month I had the opportunity to meet with a number of VMware customers in both Singapore and the UAE. Most of the sessions were enablement and education sessions, with a lot of white-boarding of VSAN (VMware’s hyper-converged infrastructure product) and Virtual Volumes (VVols, Software Defined Storage or SDS for the storage arrays). These weren’t sales sessions; I’m not in sales. The objective was simply to educate. I guess when you are immersed in this stuff 24×7, it is easy to fall into the trap of believing that everyone is well versed in this technology, and that’s simply not the case.

With both virtualization teams and storage teams in the room at the same time, it was important to show the building blocks of each approach, and to compare and contrast the advantages of each storage solution over the other. As I repeatedly delivered the same session, I thought it might be useful to share my thoughts with a broader audience in the form of this blog post.

Continue reading

VSAN.ClomMaxComponentSizeGB explained

In the VSAN Troubleshooting Reference Manual, the following description of VSAN.ClomMaxComponentSizeGB is provided:

By default VSAN.ClomMaxComponentSizeGB is set to 255GB. When Virtual SAN stores virtual machine objects, it creates components whose default size does not exceed 255 GB. If you use physical disks that are smaller than 255GB, then you might see errors similar to the following when you try to deploy a virtual machine:

There is no more space for virtual disk XX. You might be able to continue this session by freeing disk space on the relevant volume and clicking retry.
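
To make the knob concrete, here is a small Python sketch of the sizing logic at play. The function name and the 80% headroom factor are assumptions for illustration, not an official formula; consult the Troubleshooting Reference Manual before changing the advanced setting.

```python
def suggest_clom_max_component_size(smallest_disk_gb: float,
                                    default_gb: int = 255) -> int:
    """Suggest a VSAN.ClomMaxComponentSizeGB value.

    If the smallest capacity disk is below the 255GB default, a
    component may not fit on any single disk and provisioning can
    fail with the 'no more space' error quoted above. The 80%
    headroom factor used here is an assumption for illustration.
    """
    if smallest_disk_gb >= default_gb:
        return default_gb  # the default is fine; no change needed
    return int(smallest_disk_gb * 0.8)

print(suggest_clom_max_component_size(200))  # e.g. 160 for 200GB disks
```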

Continue reading

I/O Scheduler Queues Improvement for Virtual Machines

This is a new feature in vSphere 6.0 that I only recently became aware of. Prior to vSphere 6.0, all the I/Os from a given virtual machine to a particular device shared a single I/O queue. All the I/Os from the VM (boot VMDK, data VMDK, snapshot delta) were queued into a single per-VM, per-device queue. This caused I/Os from different VMDKs to interfere with each other, and could hurt fairness.

For example, if a VMDK was used by a database, and this database issued a lot of I/O, its I/Os could compete with those from the boot disk. This in turn could make it appear that the VM (guest OS) was running slowly.
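
A toy Python model makes the fairness difference visible. The round-robin dispatcher and workload names below are invented for illustration, and are not the actual scheduler implementation:

```python
from collections import Counter, deque

def dispatch(queues, budget):
    """Round-robin: complete one I/O from each non-empty queue per pass."""
    completed = Counter()
    while budget > 0 and any(queues):
        for q in queues:
            if q and budget > 0:
                completed[q.popleft()] += 1
                budget -= 1
    return completed

# Pre-6.0 behaviour: one shared per-VM queue. The database burst sits
# ahead of the boot disk's I/Os, so no boot I/Os complete in time.
shared = [deque(["db"] * 100 + ["boot"] * 5)]
print(dispatch(shared, budget=50))   # Counter({'db': 50})

# vSphere 6.0 behaviour: separate per-VMDK queues. Round-robin keeps
# the boot disk responsive despite the database burst.
split = [deque(["db"] * 100), deque(["boot"] * 5)]
print(dispatch(split, budget=50))    # Counter({'db': 45, 'boot': 5})
```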

Continue reading

Using NexentaConnect for file shares on VSAN

I already wrote an article on the NexentaConnect for VSAN product after seeing it in action at VMworld last year. More recently, I had the opportunity to play with it in earnest. Rather than giving you the whole low-down on NexentaConnect, I will use this post to show the steps involved in presenting a file share built by NexentaConnect to a VM. In this case, the VM and the file share both reside on Virtual SAN. I will also show you how to revert to a point-in-time snapshot of the file share using NexentaConnect. To answer the common question, “can VSAN do file shares as well as store virtual machines?”: yes, it can, and this post will show you how.

Continue reading

Using HyTrust to encrypt VMDKs on VSAN

I’ve had an opportunity recently to get some hands-on with HyTrust’s DataControl product, encrypting virtual machine disks in my Virtual SAN 6.0 environment. I won’t deep-dive into all of the “bells and whistles” of HyTrust; my good buddy Rawlinson has already done a tremendous job detailing those in this blog post. Instead I am going to go through a step-by-step example of how to use HyTrust and show how it prevents your virtual machine disks from being snooped. In my case, I am encrypting virtual machine disks from VMs that are deployed on VSAN, as I have had this question in the past: can VMDKs on VSAN be encrypted? The answer is yes, and this post will show you how.

Continue reading

Heads Up! ATS Miscompare detected between test and set HB images

I’ve been hit up this week by a number of folks asking about “ATS Miscompare detected between test and set HB images” messages after upgrading to vSphere 5.5U2 or 6.0. The purpose of this post is to give you some background on why this might have started to happen.

First off, ATS is the Atomic Test and Set primitive, one of the VAAI primitives. You can read all about the VAAI primitives in the white paper. HB is short for heartbeat. This is how ownership of a file (e.g. a VMDK) is maintained on VMFS, i.e. a lock. You can read more about heartbeats and locking in this blog post of mine from a few years back. In a nutshell, the heartbeat region of VMFS is used for on-disk locking, and every host that uses the VMFS volume has its own heartbeat region. This region is updated by the host on every heartbeat; specifically, the time stamp is refreshed, which tells other hosts that this host is alive. When the host is down, this region is used to communicate lock state to other hosts.
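
As a rough mental model only (the real on-disk format, slot allocation, and lease timeouts differ), a heartbeat region can be pictured like this in Python:

```python
import time

class HeartbeatRegion:
    """Toy model of VMFS on-disk heartbeat slots, for illustration only.

    Each host owns one slot and refreshes its timestamp on every
    heartbeat; other hosts treat a stale timestamp as evidence that
    the owner is down and its locks can be recovered.
    """
    TIMEOUT = 8.0  # seconds; an assumption, not the real VMFS lease

    def __init__(self):
        self.slots = {}  # host id -> last heartbeat timestamp

    def beat(self, host: str) -> None:
        self.slots[host] = time.monotonic()

    def is_alive(self, host: str) -> bool:
        ts = self.slots.get(host)
        return ts is not None and (time.monotonic() - ts) < self.TIMEOUT

region = HeartbeatRegion()
region.beat("esxi-01")
print(region.is_alive("esxi-01"))  # True while the host keeps beating
```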

In vSphere 5.5U2, we started using ATS for maintaining the heartbeat. Prior to this release, we only used ATS when the heartbeat state changed. For example, referring to the older blog post, we used ATS in the following cases:

  • Acquire a heartbeat
  • Clear a heartbeat
  • Replay a heartbeat
  • Reclaim a heartbeat

We did not use ATS to maintain the ‘liveness’ of a heartbeat. This is the change that was introduced in 5.5U2, and it appears to have led to issues on certain storage arrays.
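
Conceptually, ATS is a compare-and-swap against a disk sector: the host supplies the image it expects to find on disk (the test image) and the new image to write (the set image), and the array completes the write only if the comparison succeeds. The sketch below models that semantic in Python; the sector contents and helper are invented for illustration:

```python
class ATSMiscompare(Exception):
    """Raised when the on-disk image no longer matches the test image."""

def atomic_test_and_set(disk: dict, sector: int,
                        test_image: bytes, set_image: bytes) -> None:
    """Compare-and-swap on a single sector, modelling VAAI ATS.

    The array compares the current contents against test_image and
    installs set_image only on a match; any divergence (another
    writer, a lost or replayed write) surfaces as a miscompare.
    """
    if disk.get(sector) != test_image:
        raise ATSMiscompare(f"sector {sector}: test image mismatch")
    disk[sector] = set_image

disk = {42: b"HB:esxi-01:t=100"}
# Normal heartbeat refresh: test what we last wrote, set the new stamp.
atomic_test_and_set(disk, 42, b"HB:esxi-01:t=100", b"HB:esxi-01:t=101")
# If the on-disk state ever diverges from what the host last wrote,
# the next refresh fails with the miscompare message seen in the logs.
```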

Continue reading