VSAN.ClomMaxComponentSizeGB explained

In the VSAN Troubleshooting Reference Manual, the following description of VSAN.ClomMaxComponentSizeGB is provided:

By default, VSAN.ClomMaxComponentSizeGB is set to 255 GB. When Virtual SAN stores virtual machine objects, it creates components whose default size does not exceed 255 GB. If you use physical disks that are smaller than 255 GB, you might see errors similar to the following when you try to deploy a virtual machine:

There is no more space for virtual disk XX. You might be able to continue this session by freeing disk space on the relevant volume and clicking retry.
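
If you want to check (or lower) this setting across all the hosts in a cluster, the snippet below is one way to do it. This is a minimal pyVmomi sketch rather than anything official: the vCenter hostname and credentials are placeholders, and the commented-out update call caps component size at an arbitrary 180 GB purely for illustration. The same setting is also exposed per host under Advanced System Settings and via esxcli system settings advanced.

```python
# Minimal pyVmomi sketch: inspect (and optionally lower) the
# VSAN.ClomMaxComponentSizeGB advanced setting on every ESXi host.
# Hostname and credentials are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use real certs in production
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='passwd', sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    adv = host.configManager.advancedOption
    current = adv.QueryOptions('VSAN.ClomMaxComponentSizeGB')[0]
    print(host.name, current.value)
    # Example only: cap component size at 180 GB for hosts with small disks.
    # adv.UpdateOptions(changedValue=[vim.option.OptionValue(
    #     key='VSAN.ClomMaxComponentSizeGB', value=180)])
view.Destroy()
Disconnect(si)
```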

Continue reading

Losing the VASA Provider and/or vCenter Server in VVols

With the release of vSphere 6.0 earlier this year, VMware introduced the eagerly anticipated Virtual Volumes (VVols). As we see more and more traction around VVols, a specific question has come up a number of times already: “What happens to VVols if I lose my VASA Provider or my vCenter Server, or indeed both of these components? Will I still have access to my devices?”

Continue reading

A new vRealize Log Insight Content Pack for VSAN

Attention VSAN users. A new Log Insight content pack has just been released specifically for Virtual SAN. For those of you not familiar with Log Insight, this product provides automated log management through log analytics, aggregation and search. It allows administrators to analyze terabytes of logs, perform smart parsing to discover structure in unstructured data, and carry out interactive, real-time search and analytics through a GUI-based, easy-to-use interface.

Continue reading

Some upcoming speaking engagements

A short post to let you know about some upcoming speaking engagements that I am doing over the next couple of weeks.

First up, I will be speaking at the TechUG (Technology User Group) event next week, on Thursday, November 26th, at the Westin Hotel in the heart of Dublin city, Ireland. There is a really good agenda for this event (which is not a VMware-centric event), and you can find it at this link here. I personally will be speaking about Virtual SAN (VSAN), VMware’s hyper-converged compute and storage platform. This will be more of an introductory session, but I’ll also give an overview of new and upcoming features and where we are thinking of going next with VSAN. You can find the Dublin TechUG registration link here.

My next session is at the VMUGDK UserCon, or Nordics UserCon, which will be held on Tuesday, December 1st, at the Scandic Hotel in Copenhagen, Denmark. This year I will return to my roots and talk about core vSphere storage enhancements over the past few releases, as well as some upcoming plans. No VSAN, VVols or anything like that: this will be a discussion on VMFS, NFS, VAAI, PSA, etc. The Nordic UserCon details can be found at this link here, and the registration link is at the same location.

If you are in the Dublin or Copenhagen area for any of these events, I’d love to see you there. I plan to spend most of the day at both events, so if there are any VSAN or vSphere storage questions or feedback that you’d like to give me, I’d be delighted to talk with you in person.

I/O Scheduler Queues Improvement for Virtual Machines

This is a new feature in vSphere 6.0 that I only recently became aware of. Prior to vSphere 6.0, all the I/Os from a given virtual machine to a particular device shared a single I/O queue. As a result, all I/Os from the VM (boot VMDK, data VMDK, snapshot delta) were funneled into a single per-VM, per-device queue. This caused I/Os from different VMDKs to interfere with each other and could actually hurt fairness.

For example, if a VMDK was used by a database, and this database issued a lot of I/O, those I/Os could compete with I/Os from the boot disk. This in turn could make it appear that the VM (guest OS) was running slowly.
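
To make the fairness problem concrete, here is a purely illustrative toy model in Python (not VMware code; the tick-based round-robin dispatcher is my own simplification). With one shared FIFO, a single boot-disk read queued behind 1,000 database I/Os waits for all of them to drain; with per-VMDK queues serviced round-robin, it gets dispatched on the second tick.

```python
from collections import deque

def first_boot_completion_shared(data_ios, boot_ios):
    # Pre-6.0 model: one FIFO per VM/device, so boot I/Os sit behind
    # every data I/O that was already queued.
    q = deque(['data'] * data_ios + ['boot'] * boot_ios)
    for tick in range(1, len(q) + 1):
        if q.popleft() == 'boot':
            return tick

def first_boot_completion_per_vmdk(data_ios, boot_ios):
    # vSphere 6.0 model (simplified): one queue per VMDK, serviced
    # round-robin, so the boot disk gets a dispatch slot every cycle.
    queues = {'data': deque(['data'] * data_ios),
              'boot': deque(['boot'] * boot_ios)}
    tick = 0
    while any(queues.values()):
        for q in queues.values():
            if q:
                tick += 1
                if q.popleft() == 'boot':
                    return tick

# A single boot-disk read queued behind 1000 outstanding database I/Os:
print(first_boot_completion_shared(1000, 1))    # -> 1001
print(first_boot_completion_per_vmdk(1000, 1))  # -> 2
```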

Continue reading

VSAN Design & Sizing – Memory overhead considerations

This week I was in Berlin for our annual Tech Summit in EMEA, an event for our field folks in the region. I presented a number of VSAN sessions, including a design and sizing session. As part of that session, the topic of VSAN memory consumption was raised. In the past, we’ve only ever really talked about the host memory requirements for disk group configuration, as highlighted in this post here. For example, as per that post, to run a fully configured Virtual SAN system, with 5 fully populated disk groups per host and 7 disks in each disk group, a minimum of 32GB of host memory is needed. This is not memory consumed by VSAN, by the way; this memory may also be used to run workloads. Consider it a configuration limit if you will. As per the post above, if hosts have less than 32GB of memory, then we scale back on the number of disk groups that can be created on the host.
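
As a back-of-the-envelope illustration of that configuration limit, here is a tiny hypothetical helper. The per-disk-group figure is just the 32GB ceiling divided by the 5-disk-group maximum; that is my own simplification for illustration, not a published VMware number (the post linked above describes the actual scale-back behaviour).

```python
# Hypothetical back-of-the-envelope check, derived only from the rule
# above: a full 5-disk-group configuration needs at least 32GB of host
# memory. The 32/5 = 6.4GB per-disk-group figure is an assumption for
# illustration, not a published VMware number.
MAX_DISK_GROUPS = 5
MEM_FOR_MAX_GB = 32.0

def max_disk_groups(host_memory_gb):
    """Scale back the number of disk groups on hosts with < 32GB RAM."""
    per_group_gb = MEM_FOR_MAX_GB / MAX_DISK_GROUPS
    return min(MAX_DISK_GROUPS, int(host_memory_gb // per_group_gb))

for mem in (16, 24, 32):
    print(mem, 'GB ->', max_disk_groups(mem), 'disk group(s)')
```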

To the best of my knowledge, we have never shared information about what contributes to memory consumption on VSAN clusters. That is what I plan to talk about in this post.

Continue reading

SMP-FT support on Virtual SAN

There have been a number of questions recently about SMP-FT on Virtual SAN. Symmetric Multi-Processing Fault Tolerance (SMP-FT) is a feature that many VMware customers have been waiting for, and with the release of vSphere 6.0, the SMP-FT capability finally became available. However, that release did not include SMP-FT support for VMs running on VSAN. With the release of vSphere 6.0U1, which included VSAN 6.1, there is now support for SMP-FT when the VM runs on VSAN.

There are some caveats when it comes to the different VSAN deployment methodologies:

  1. On standard VSAN deployments, SMP-FT is supported
  2. On ROBO/2-node VSAN deployments, SMP-FT is supported (announced at VMworld 2015 in Barcelona)
  3. On stretched cluster VSAN deployments, SMP-FT is not supported; the latencies over stretched-cluster distances (up to 5 msec RTT) are simply too great for fault tolerance

Another common question is whether the remote witness appliance for both stretched cluster and 2-node/ROBO deployments can be protected by SMP-FT on a remote host and remote datastores. While this could work in theory, it has not been tested. The general consensus is that vSphere HA should be enough to protect the witness appliance.

With this in mind, let’s see how one would go about configuring SMP-FT on a standard VSAN deployment.
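
Configuration is normally done from the vSphere Web Client, but as a taster, here is a minimal pyVmomi sketch of turning on FT programmatically. The vCenter details and VM name are placeholders, and I am assuming the VM already meets the SMP-FT prerequisites (virtual hardware version 11, no more than 4 vCPUs, and a 10GbE FT logging network).

```python
# Minimal pyVmomi sketch: turn on Fault Tolerance for one VM.
# Assumptions: hostname, credentials and VM name are placeholders,
# and the VM already meets the SMP-FT prerequisites.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='passwd', sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'smpft-test-vm')  # placeholder
view.Destroy()

# CreateSecondaryVMEx_Task is the vSphere 6.0 API call behind "Turn On
# Fault Tolerance"; passing no spec lets vSphere pick placement defaults.
WaitForTask(vm.CreateSecondaryVMEx_Task())
print('FT state:', vm.runtime.faultToleranceState)
Disconnect(si)
```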

Continue reading