This is a query which has come up on numerous occasions in the past, especially in the comments section of a blog post on debunking SIOC myths on the vSphere Storage Blog. This post highlights some recommendations to implement when your storage array presents LUNs that are spread across all spindles, or indeed multiple LUNs all backed by the same set of spindles from a particular aggregate or storage pool.
I have had a few occasions recently to use vscsiStats. For those of you who may be unfamiliar, it is a great tool for characterizing virtual machine disk I/O workloads. Have you ever wondered about the most common I/O size generated by the Guest OS? What about the latency of those I/Os? What about the I/O generated by a Guest OS when it is in a so-called ‘idle’ state? vscsiStats can help with all of these queries, as well as providing some excellent troubleshooting options. The tool has been around since the ESX 3.5 days. This…
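As a rough sketch of a typical workflow from the ESXi shell (the world group ID 12345 below is just a placeholder; use the value reported for your own VM):

# List running VMs and their world group IDs
vscsiStats -l

# Start collecting statistics for the VM with world group ID 12345
vscsiStats -s -w 12345

# After the workload has run for a while, print histograms,
# e.g. the I/O length and latency distributions
vscsiStats -p ioLength -w 12345
vscsiStats -p latency -w 12345

# Stop collection when finished
vscsiStats -x -w 12345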
The answer right now is no, but if you are interested in how this query came about, and why I decided to blog about it, continue reading. It has something for those of you interested in some of the underlying workings of Storage DRS.
A number of you have reached out about how to change some of the settings around path policies, in particular how to set the default number of IOPS in the Round Robin path selection policy (PSP) to 1. While many of you have written scripts to do this, when you reboot the ESXi host the defaults of the PSP are re-applied, and you then have to run the scripts again to reapply the changes. Here I will show you how to modify the defaults so that when you unclaim/reclaim the devices, or indeed reboot the host, the desired settings come…
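As an illustration (the vendor string "VendorXYZ" and the device ID below are hypothetical; substitute the strings your array actually reports), a SATP claim rule along these lines makes Round Robin with iops=1 the default for matching devices, so the setting survives unclaim/reclaim cycles and host reboots:

# Add a claim rule: devices from this vendor default to Round Robin with iops=1
esxcli storage nmp satp rule add --satp="VMW_SATP_DEFAULT_AA" \
  --psp="VMW_PSP_RR" --psp-option="iops=1" --vendor="VendorXYZ"

# Reclaim an existing device so the new rule takes effect
esxcli storage core claiming reclaim --device naa.60000000000000000000000000000001

Note that VMW_SATP_DEFAULT_AA is just one example; use whichever SATP currently claims your devices.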
You may remember an enhancement we made to Storage I/O Control (SIOC) in the vSphere 5.1 release whereby SIOC can now automatically determine the characteristics, and thus the latency threshold, of a datastore. Prior to this change, SIOC either used a default value or had customers set the threshold manually. Neither approach was ideal, so we introduced this automatic method. However, there was little detail on how often this latency threshold is calculated. In other words, does the calculation take place only when SIOC is first enabled, or are there regular, ongoing calculations?
I’m delighted to be involved in an upcoming project on vSphere Design Considerations. This is the brain-child of my ex-colleague, Frank Denneman, who recently fu^H^H decided to broaden his horizons and join an interesting new start-up called PernixData. 🙂 Also on the project are Duncan Epping (still a colleague), Jason Nash and the inimitable Vaughn Stewart of NetApp. To cut to the chase, the objective here is to create a pocket-sized book containing the best vSphere design considerations. However, the cool part of this book is that the community are the authors. And the book will be free, courtesy of…
This is something that was recently brought to my attention, and I wasn’t aware of this difference in behavior between the various storage vendors who implement VAAI-NAS. VAAI-NAS implements a number of different offload primitives, but the one we are interested in here is the Fast File Clone primitive, which offloads the creation of snapshots/clones to the NAS storage array. This mechanism is also referred to as Native Snapshots. However, some arrays cannot support a full chain of snapshots.
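If you want to check whether your array vendor's VAAI-NAS plugin is present, and whether offloads are reported as supported on a given mount, something like the following should work from the ESXi shell (the grep pattern is only illustrative; plugin VIB names vary by vendor):

# Look for a vendor VAAI-NAS plugin among the installed VIBs
esxcli software vib list | grep -i nas

# List NFS mounts; the Hardware Acceleration column indicates VAAI-NAS support
esxcli storage nfs list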