SIOC and datastores spread across all spindles in the array

This is a query which has come up on numerous occasions in the past, especially in the comments section of a blog post debunking SIOC myths on the vSphere Storage Blog. This post highlights some recommendations to implement when your storage array presents LUNs which are spread across all spindles, or indeed multiple LUNs all backed by the same set of spindles from a particular aggregate or storage pool.

Getting started with vscsiStats

I have had a few occasions recently to use vscsiStats. For those of you who may be unfamiliar with it, this is a great tool for characterizing virtual machine disk I/O workloads. Have you ever wondered about the most common I/O size generated by the Guest OS? What about the latency of those I/Os? What about the I/O a Guest OS generates when it is in a so-called ‘idle’ state? vscsiStats can help with all of these queries, as well as providing some excellent troubleshooting options. The tool has been around since the ESX 3.5 days. This…
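
For illustration, here is a minimal sketch of a typical vscsiStats session, run in the ESXi Shell. The worldGroupID 12345 is a placeholder taken from the -l output, not a real value:

    # List running VMs and their worldGroupIDs
    vscsiStats -l

    # Start collecting histograms for the VM with worldGroupID 12345
    vscsiStats -s -w 12345

    # Let the workload run for a while, then print histograms
    vscsiStats -p ioLength -w 12345
    vscsiStats -p latency -w 12345

    # Stop collection when finished
    vscsiStats -x

Other histogram types, such as seekDistance, outstandingIOs and interarrival, can be printed the same way.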

Automating the IOPS setting in the Round Robin PSP

A number of you have reached out about how to change some of the settings around path policies, in particular how to set the default number of IOPS in the Round Robin path selection policy (PSP) to 1. While many of you have written scripts to do this, when you reboot the ESXi host the defaults of the PSP are re-applied, and you have to run the scripts again to reapply the changes. Here I will show you how to modify the defaults so that when you unclaim/reclaim the devices, or indeed reboot the host, the desired settings come…
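
As a sketch of the idea: the per-device change that most scripts make does not survive an unclaim/reclaim or a reboot, whereas adding an SATP claim rule makes iops=1 the default for any device the rule matches. The naa.* identifier, the SATP name and the vendor/model strings below are placeholders to replace with values matching your own array:

    # Per-device setting -- lost when the device is unclaimed/reclaimed or the host reboots
    esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxxxxxxxxxx --type=iops --iops=1

    # Claim-rule approach -- matching devices are claimed with Round Robin and iops=1 by default
    esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V "VendorX" -M "ModelY" -P VMW_PSP_RR -O "iops=1"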

Proximal Data introduces Autocache 1.1 – Guest OS Flash Acceleration

Those of you attending VMUG (VMware User Group) meetings in the US recently may have come across the guys from Proximal Data. They were at the Austin and Silicon Valley VMUGs, and I believe they may even have given the keynote at the San Diego VMUG. I had the pleasure of meeting up with Rich Pappas (VP of Sales and Business Development) and storage veteran Rory Bolt (CEO) at VMware’s Partner Exchange this year. They gave me an overview of their new Autocache 1.1 features.

Raxco introduces PerfectStorage – Guest OS Space Reclaim

I was first introduced to Raxco Software when I wrote an article on the vSphere Storage Blog related to fragmentation on Guest OS file systems. In that post, I wanted to highlight some side effects of running a defragment operation on the file system in the Guest OS (primarily the Windows defragger). Raxco reached out to say that they had a product that would actually prevent fragmentation occurring in the first place, which I thought was rather neat. Bob Nolan, Raxco’s CEO, reached out to me again recently to let me know about a new product that they were…

Heads Up! Device Queue Depth on QLogic HBAs

Just thought I’d bring to your attention something that has been doing the rounds here at VMware recently, which will be applicable to those of you using QLogic HBAs with ESXi 5.x. The following are the device queue depths you will find when using QLogic HBAs for SAN connectivity:

ESXi 4.1 U2 – 32
ESXi 5.0 GA – 64
ESXi 5.0 U1 – 64
ESXi 5.1 GA – 64

The higher depth of 64 has been in place since 24 Aug 2011 (the 5.0 GA release). The issue is that this has not been documented anywhere. For the majority of…
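
To illustrate, here is how you might check a device’s maximum queue depth and, if needed, set the QLogic driver back to a depth of 32. The naa.* identifier is a placeholder, the module name qla2xxx is an assumption that depends on the driver in use, and the module parameter only takes effect after a reboot:

    # Check the current max queue depth for a device
    esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep -i "Queue Depth"

    # Set the QLogic driver's max queue depth to 32 (then reboot the host)
    esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=32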

Tintri 2.0 – Per VM Replication Feature

Last week, I had a chance to catch up with Brady Murray and Rex Walters of Tintri. Mostly this was a transfer of information, but the guys let me know that they are on the verge of announcing a new per-VM replication feature which they first demoed to me when I met Tintri at VMworld last year. This will be the main feature in Tintri’s new 2.0 launch.