CormacHogan.com

I/O Scheduler Queues Improvement for Virtual Machines

This is a new feature in vSphere 6.0 that I only recently became aware of. Prior to vSphere 6.0, all the I/Os from a given virtual machine to a particular device would share a single I/O queue. This would result in all the I/Os from the VM (boot VMDK, data VMDK, snapshot delta) being queued into a single per-VM, per-device queue. This caused I/Os from different VMDKs to interfere with one another, which could actually hurt fairness.

For example, if a VMDK was used by a database, and this database issued a lot of I/O, this could compete with I/Os from the boot-disk. This in turn could make it appear that the VM (Guest OS) is running slowly.

With this change in vSphere 6.0, each of the VMDK disks on a given device is given a separate scheduler queue. The storage stack now provides bandwidth controls to tune each of these queues for fair I/O sharing and traffic shaping.
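To illustrate why separate per-VMDK queues help fairness, here is a small conceptual sketch (not VMware's actual scheduler, just a toy model): with a single shared queue, a latency-sensitive boot-disk I/O can get stuck behind a large burst from a database VMDK, while per-file queues with round-robin dispatch service it almost immediately. The queue names and sizes are made up for the example.

```python
from collections import deque

def shared_queue_latency(heavy, light):
    # One per-VM, per-device queue: the DB burst is already queued,
    # so the first boot-disk I/O waits behind the entire burst.
    q = deque([("db", i) for i in range(heavy)] +
              [("boot", i) for i in range(light)])
    tick = 0
    while q:
        disk, _ = q.popleft()
        tick += 1
        if disk == "boot":
            return tick  # completion time of the first boot-disk I/O

def per_file_queue_latency(heavy, light):
    # One queue per VMDK, with round-robin dispatch between queues,
    # roughly how per-file scheduling restores fairness.
    qs = {"db": deque(range(heavy)), "boot": deque(range(light))}
    tick = 0
    while any(qs.values()):
        for disk in ("db", "boot"):
            if qs[disk]:
                qs[disk].popleft()
                tick += 1
                if disk == "boot":
                    return tick

print(shared_queue_latency(1000, 10))    # boot I/O waits behind the whole DB burst
print(per_file_queue_latency(1000, 10))  # boot I/O serviced on the second dispatch
```

In the shared-queue model the boot-disk I/O completes only after all 1000 database I/Os; with per-file queues it is serviced on the second slot of the first round-robin pass.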

This change is currently enabled by default in vSphere 6.0. It is controlled by a boot-time flag which can be disabled if required. If you need to turn it off, you can do this by adjusting the following parameter in the advanced system settings:

VMkernel.Boot.isPerFileSchedModelActive 
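If you prefer the command line to the vSphere client's advanced system settings, the same boot option can be inspected and changed from the ESXi host shell. The exact command form below is my assumption based on the standard esxcli kernel-settings namespace; verify it on your build, and note that a boot-time option only takes effect after a host reboot.

```shell
# Check the current value of the per-file scheduler boot option
esxcli system settings kernel list -o isPerFileSchedModelActive

# Disable the per-file I/O scheduler (requires a host reboot to take effect)
esxcli system settings kernel set -s isPerFileSchedModelActive -v FALSE
```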

This enhancement is currently only available for VMFS. It is not available for NFS at this time.

One area where we have observed significant performance improvement with this new scheduling mechanism is with snapshots. A 4TB snapshot can now be completed in under one third of the time previously taken, while continuing to be fair to the guest OS I/Os.
