Automating the IOPS setting in the Round Robin PSP

A number of you have reached out about how to change some of the settings around path policies, in particular how to set the default number of IOPS in the Round Robin path selection policy (PSP) to 1. While many of you have written scripts to do this, when you reboot the ESXi host the defaults of the PSP are re-applied, and you have to run the scripts again to reapply the changes. Here I will show you how to modify the defaults so that when you unclaim/reclaim the devices, or indeed reboot the host, the desired settings come…
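
The gist of the approach: set the value per device for anything already claimed, then add a SATP claim rule so the setting is applied automatically whenever devices are claimed in future, including after a reboot. A minimal sketch, assuming a hypothetical device ID, placeholder vendor/model strings, and the default active/active SATP; adjust all of these to your own array:

    # Apply IOPS=1 to a device that is already claimed
    # (this change is lost when the device is unclaimed or the host reboots)
    esxcli storage nmp psp roundrobin deviceconfig set \
        --device naa.60000000000000000000000000000001 --type iops --iops 1

    # Add a SATP claim rule so future claims default to Round Robin with IOPS=1
    # (vendor/model strings here are placeholders; match them to your array)
    esxcli storage nmp satp rule add --satp VMW_SATP_DEFAULT_AA \
        --vendor "VENDOR" --model "MODEL" \
        --psp VMW_PSP_RR --psp-option "iops=1" \
        --description "Round Robin with IOPS=1"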

Storage I/O Control – Workload Injector Behaviour

You may remember an enhancement we made to Storage I/O Control (SIOC) in the vSphere 5.1 release whereby SIOC can now automatically determine the characteristics, and thus the latency threshold, of a datastore. Prior to this change, SIOC used either a default value or had customers set it manually. Neither of these was ideal, so we introduced this automatic method. However, there was little detail on how often this latency threshold is calculated. In other words, does the calculation take place only when SIOC is first enabled, or are there regular, ongoing calculations?

VAAI-NAS – Some snapshot chains are deeper than others

This was recently brought to my attention, and I wasn’t aware of this difference in behavior between the various storage vendors who implement VAAI-NAS. VAAI-NAS implements a number of different offload primitives, but the one we are interested in here is the Fast File Clone primitive: the ability to offload the creation of snapshots/clones to the NAS storage array. This mechanism is also referred to as Native Snapshots. However, some arrays cannot support a full chain of snapshots.
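
As an aside, if you want to confirm whether VAAI-NAS is active on a host at all, a quick sketch (exact output varies by vendor plugin and release):

    # The vendor's VAAI-NAS plugin ships as a VIB; check that it is installed
    esxcli software vib list | grep -i nas

    # List NFS mounts; the Hardware Acceleration column shows whether
    # offloads such as Fast File Clone are supported on each datastore
    esxcli storage nfs list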

What happens when VMFS heap depletes completely?

I’ve blogged about the VMFS heap situation numerous times already. However, a question that I frequently get asked is what actually happens when heap runs out? I thought I’d put together a short article explaining the symptoms one would see when there is no VMFS heap left on an ESXi host. Thanks once again to my good friend and colleague, Paudie O’Riordan, for sharing his support experiences with me on this matter – “together we win”, right Paud?
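
For context, the heap is sized by the VMFS3.MaxHeapSizeMB advanced setting. A quick sketch of checking and raising it (the value 256 is purely illustrative; check the maximum supported on your build):

    # Show the current and maximum VMFS heap configuration
    esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB

    # Raise the heap ceiling (in MB); a host reboot is needed for it to take effect
    esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256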

Heads Up! EMC VNX, Recoverpoint and VAAI

I just got a notification about this myself today. Apparently there are some interoperability issues with VAAI (vSphere APIs for Array Integration) and EMC RecoverPoint on EMC VNX arrays. It looks like the VNX Storage Processor (SP) may reboot with Operating Environment Release 32 P204 in a RecoverPoint environment. EMC today released a technical advisory – ETA emc327099 – which describes the issue in more detail, but is basically advising customers to disable VAAI on all ESXi hosts in the RecoverPoint environment while they figure this out. Hopefully it won’t take too long to come up with a…
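
For reference, disabling VAAI on an ESXi host is done through three advanced settings, one per block primitive; set them back to 1 to re-enable once a fix ships:

    # Disable XCOPY (Full Copy), Write Same (Block Zeroing) and ATS respectively
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
    esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0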

Heads Up! UNMAP considerations when reclaiming more than 2TB

Thanks to our friends over at EMC (shout out to Itzik), we’ve recently been made aware of a limitation in our UNMAP mechanism in ESXi 5.0 & 5.1. It would appear that if you attempt to reclaim more than 2TB of dead space in a single operation, the UNMAP primitive does not handle it very well. The current thinking is that this is because we have a 2TB (minus 512 bytes) file size limit on VMFS-5. When the space to reclaim is above this size, we cannot create the very large temporary balloon file (part of the UNMAP process), and…
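
In the meantime, one way to stay under the limit is to reclaim in smaller passes, sized so that each temporary balloon file remains below 2TB. A sketch, with a placeholder datastore name:

    # Run the reclaim from the root of the VMFS volume
    cd /vmfs/volumes/my-vmfs-datastore

    # Reclaim dead space; the argument is the percentage of free space to
    # balloon in this run, so choose it such that the temporary balloon
    # file stays below the 2TB file size limit
    vmkfstools -y 60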

Proximal Data introduces Autocache 1.1 – Guest OS Flash Acceleration

Those of you attending VMUG (VMware User Group) meetings in the US recently may have come across the guys from Proximal Data. They were at the Austin and Silicon Valley VMUGs, and I believe they may even have had the keynote at the San Diego VMUG. I had the pleasure of meeting up with Rich Pappas (VP of Sales and Business Development) and storage veteran Rory Bolt (CEO) at VMware’s Partner Exchange this year. They gave me an overview of the new features in Autocache 1.1.