Some upcoming speaking engagements

A short post to let you know about some speaking engagements I have coming up over the next couple of weeks.

First up, I will be speaking at the TechUG (Technology User Group) event next week, on Thursday, November 26th, at the Westin Hotel in the heart of Dublin city, Ireland. There is a really good agenda for this event (which is not a VMware-centric event), which you can find at this link here. I personally will be speaking about Virtual SAN (VSAN), VMware’s hyper-converged compute and storage platform. This will be more of an introductory session, but I’ll also be giving an overview of new and upcoming features and where we are thinking about going next with VSAN. You can find the Dublin TechUG registration link here.

My next session is at the VMUGDK UserCon, or Nordics UserCon, which will be held on Tuesday, December 1st, at the Scandic Hotel in Copenhagen, Denmark. This year I will return to my roots and talk about core vSphere storage enhancements over the past few releases, with a look at some upcoming plans. No VSAN, VVols or anything like that; this will be a discussion on VMFS, NFS, VAAI, PSA, etc. The Nordic UserCon details can be found at this link here, and the registration link is at the same location.

If you are in the Dublin or Copenhagen area for either of these events, I’d love to see you there. I plan to spend most of the day at both events, so if you have any VSAN or vSphere storage questions or feedback for me, I’d be delighted to talk with you in person.

I/O Scheduler Queues Improvement for Virtual Machines

This is a new feature in vSphere 6.0 that I only recently became aware of. Prior to vSphere 6.0, all the I/Os from a given virtual machine to a particular device shared a single I/O queue. This meant that all the I/Os from the VM (boot VMDK, data VMDK, snapshot delta) were queued into a single per-VM, per-device queue. This caused I/Os from different VMDKs to interfere with each other, and could actually hurt fairness.

For example, if a VMDK was used by a database, and this database issued a lot of I/O, those I/Os could compete with I/Os from the boot disk. This in turn could make it appear that the VM (Guest OS) was running slowly.
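To make the fairness problem concrete, here is a toy Python sketch of my own (not VMkernel code; the service rate, burst size and round-robin policy are all invented purely for illustration) comparing a single shared per-VM queue with separate per-VMDK queues:

    from collections import deque

    def simulate(per_vmdk, ticks=1000, services_per_tick=5):
        """Toy model of a device completing a fixed number of I/Os per tick.

        The boot VMDK submits 1 I/O per tick; a busy data VMDK (a database,
        say) submits a burst of 40 I/Os every 10 ticks.  All numbers are
        invented for illustration.  Returns the average queueing delay,
        in ticks, experienced by boot VMDK I/Os.
        """
        boot_q, data_q = deque(), deque()   # queues hold submit timestamps
        boot_delays, turn = [], 0
        for tick in range(ticks):
            boot_q.append(tick)                      # steady boot-disk I/O
            if tick % 10 == 0:
                data_q.extend([tick] * 40)           # database burst
            for _ in range(services_per_tick):
                if per_vmdk:
                    # vSphere 6.0 style (toy version): separate per-VMDK
                    # queues serviced round-robin, so boot I/Os do not
                    # sit behind the database burst.
                    order = (boot_q, data_q) if turn % 2 == 0 else (data_q, boot_q)
                    turn += 1
                    q = next((x for x in order if x), None)
                else:
                    # Pre-6.0 style: effectively one per-VM, per-device
                    # FIFO, so the oldest I/O goes first no matter which
                    # VMDK it came from, and boot I/Os wait out the burst.
                    candidates = [x for x in (boot_q, data_q) if x]
                    q = min(candidates, key=lambda d: d[0]) if candidates else None
                if q is None:
                    continue
                submitted = q.popleft()
                if q is boot_q:
                    boot_delays.append(tick - submitted)
        return sum(boot_delays) / len(boot_delays)

    print("shared per-VM queue:", simulate(per_vmdk=False))
    print("per-VMDK queues    :", simulate(per_vmdk=True))

In this toy model, the boot disk’s average queueing delay drops sharply once each VMDK gets its own queue, which is exactly the kind of interference the vSphere 6.0 change is meant to remove.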

Continue reading

VSAN Design & Sizing – Memory overhead considerations

This week I was in Berlin for our annual Tech Summit in EMEA. This is an event for our field folks in EMEA. I presented a number of VSAN sessions, including a design and sizing session. As part of that session, the topic of VSAN memory consumption was raised. In the past, we’ve only ever really talked about the host memory requirements for disk group configuration, as highlighted in this post here. For example, as per the post, to run a fully configured Virtual SAN system, with 5 fully populated disk groups per host and 7 disks in each disk group, a minimum of 32GB of host memory is needed. This is not memory consumed by VSAN, by the way; this memory may also be used to run workloads. Consider it a configuration limit, if you will. As per the post above, if hosts have less than 32GB of memory, then we scale back the number of disk groups that can be created on the host.

To the best of my knowledge, we have never shared information about what contributes to memory consumption on VSAN clusters. That is what I plan to cover in this post.
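As a quick back-of-the-envelope illustration of that configuration limit, here is a small Python sketch. The 32GB figure for the full 5-disk-group configuration comes from the post referenced above; pro-rating it evenly across disk groups is purely my simplification for illustration, not the actual VSAN accounting (which is what the rest of this post gets into):

    def max_disk_groups(host_memory_gb, full_config_gb=32, max_groups=5):
        """Rough illustration of scaling back disk groups on smaller hosts.

        ASSUMPTION: this simply pro-rates the documented 32GB requirement
        for 5 fully populated disk groups evenly across groups.  The real
        accounting also involves a base host overhead, the cache tier
        size, and the number of capacity devices per group.
        """
        per_group_gb = full_config_gb / max_groups   # ~6.4GB per group (assumed)
        return min(max_groups, int(host_memory_gb // per_group_gb))

    for mem_gb in (16, 24, 32, 64):
        print(f"{mem_gb}GB host -> up to {max_disk_groups(mem_gb)} disk group(s)")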

Continue reading

SMP-FT support on Virtual SAN

There have been a number of questions recently about SMP-FT on Virtual SAN. Symmetric Multi-Processing Fault Tolerance (SMP-FT) is a feature that many VMware customers have been waiting for, and with the release of vSphere 6.0, the SMP-FT capability finally became available. However, that release did not include SMP-FT support when the VM runs on VSAN. With the release of vSphere 6.0U1, which included VSAN 6.1, there is now support for SMP-FT when the VM runs on VSAN.

There are some caveats when it comes to the different VSAN deployment methodologies:

  1. On standard VSAN deployments, SMP-FT is supported
  2. On ROBO/2-node VSAN deployments, SMP-FT is supported (announced at VMworld 2015 in Barcelona)
  3. On stretched cluster VSAN deployments, SMP-FT is not supported. The latencies over stretched-cluster distances (up to 5 msec RTT) are simply too great for fault tolerance

Another common question is whether the remote witness appliance for both stretched cluster and 2-node/ROBO deployments can be protected by SMP-FT on a remote host and remote datastores. While this could work in theory, it has not been tested. The general consensus is that vSphere HA should be enough to protect the witness appliance.

With this in mind, let’s see how one would go about configuring SMP-FT on a standard VSAN deployment.
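As a teaser, here is a rough pyVmomi sketch of what turning on SMP-FT might look like programmatically, assuming the CreateSecondaryVMEx_Task() method that the vSphere 6.0 API introduced for SMP-FT. The vCenter address, credentials and VM name are placeholders, and in practice you would most likely just use the Turn On Fault Tolerance action in the vSphere Web Client:

    # Minimal sketch, assuming the vSphere 6.0 API's CreateSecondaryVMEx_Task().
    # Hostname, credentials and VM name are placeholders; error handling omitted.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()           # lab only: skip cert checks
    si = SmartConnect(host="vcenter.example.com",    # placeholder vCenter
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "smpft-demo")  # placeholder VM
        view.Destroy()

        # SMP-FT is limited to 4 vCPUs; sanity-check before enabling.
        assert vm.config.hardware.numCPU <= 4, "SMP-FT supports at most 4 vCPUs"

        # Enabling FT creates the secondary VM.  Passing host=None lets
        # vCenter pick placement; on VSAN both primary and secondary
        # reside on the vsanDatastore.
        task = vm.CreateSecondaryVMEx_Task(host=None)
        # ...then wait on the task, e.g. with pyVim.task.WaitForTask(task)
    finally:
        Disconnect(si)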

Continue reading

VSAN resync behaviour when failed component recovers

I have had this question a number of times now. Those of you familiar with VSAN will know that if a component goes absent for a period of 60 minutes (the default), then VSAN will begin rebuilding a new copy of the component elsewhere in the cluster (if resources allow it). The question then is: if the missing/absent/failed component recovers and becomes visible to VSAN once again, what happens? Will we throw away the component that was just created, or will we throw away the original component that recovered?
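As a side note, that 60-minute window is governed by the VSAN.ClomRepairDelay advanced option on each host. Here is a short pyVmomi sketch (vCenter address and credentials are placeholders) that should read the current value across all hosts, handy when checking whether someone has tuned the default:

    # Sketch: read the VSAN.ClomRepairDelay advanced option (the absent-
    # component rebuild timer, 60 minutes by default) from every host.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            opts = host.configManager.advancedOption.QueryOptions("VSAN.ClomRepairDelay")
            print(f"{host.name}: ClomRepairDelay = {opts[0].value} minutes")
        view.Destroy()
    finally:
        Disconnect(si)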

Continue reading

Common VSAN health check issues and resolutions

A number of customers have experienced some issues with getting the Virtual SAN (VSAN) health check to work correctly in their environments. The most common issues have been related to permissions and certificates. In this post, I want to highlight these issues and any associated KB articles, and call out the symptoms as well as the resolutions.

Continue reading

VSAN Proactive Rebalance not starting

Some time back I wrote about proactive rebalancing, a new feature of VSAN 6.0. However, I have had a number of queries recently about its functionality. The most common query is that when the proactive rebalance operation is started, there doesn’t appear to be any rebuild/resync activity, even though the command output lists a number of disks that need to be rebalanced (rebalancing moves components between physical disks so that each disk is equally consumed).
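To give a feel for what "needs to be rebalanced" actually means, here is a small Python sketch of the kind of fullness-variance test involved. The 30% figure matches my understanding of the default variance threshold on the RVC proactive rebalance command, but treat both the threshold and the test itself as assumptions; the real logic also weighs time and rate thresholds:

    def disks_needing_rebalance(disk_usage, variance_threshold=0.30):
        """Flag disks whose fullness exceeds the least-full disk by more
        than the variance threshold.

        disk_usage maps disk name -> fraction used (0.0 to 1.0).
        ASSUMPTION: a 30% spread over the least-full disk is my reading
        of the default proactive-rebalance variance test.
        """
        least_full = min(disk_usage.values())
        return [disk for disk, used in disk_usage.items()
                if used - least_full > variance_threshold]

    usage = {"naa.01": 0.85, "naa.02": 0.40, "naa.03": 0.48, "naa.04": 0.82}
    print(disks_needing_rebalance(usage))   # -> ['naa.01', 'naa.04']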

Continue reading