First off, I’d like to wish everyone a very happy 2014. I’m starting off 2014 with a new role within VMware. After almost three years with the VMware Technical Marketing team, I’ve decided to take up a new challenge. As of January 1st, 2014, I am now a Senior Storage Architect in the Integration Engineering team, which is part of VMware R&D. This team is also known as Customer[0]. The new Integration Engineering Storage Architect role allows me to work directly with customers/partner organizations, our field staff and R&D to incubate and field-enable the next generation of VMware storage…
I’ve been having some interesting discussions with my friends over at NetApp recently. I wanted to learn more about the new clustered Data ONTAP 8.2 release and its scale-out functionality. In the storage array world, traditional scale-up mechanisms usually involved either replacing disk drives with faster/newer models or replacing old array controllers with newer controllers. In worst-case scenarios, a forklift upgrade is required to do a technology refresh of your array. Another approach, scale-out, is fast becoming the accepted way of handling storage requirements going forward. Scale-out storage is now big news. With scale-out, you simply add…
This is a topic which has been discussed time and time again. It relates to an advanced storage parameter called Disk.SchedNumReqOutstanding, or DSNRO for short. There are already a number of postings out there on the topic, so I won’t get into the details once again. If you wish to learn more about what this parameter does for you, I recommend reading this post on DSNRO from my good pal Duncan Epping. Suffice it to say that this parameter is related to virtual machine I/O fairness. In this post, I’ll talk about changes to DSNRO in vSphere 5.5.
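The headline change is that DSNRO is no longer a single host-wide advanced setting in vSphere 5.5; it is now configured on a per-device basis. As a rough sketch of that workflow (the NAA identifier below is just a placeholder from my lab, and it is worth double-checking the option name against your own build):

    # check the current per-device value - look for "No of outstanding IOs with competing worlds"
    esxcli storage core device list -d naa.600508b4000f0aab0000800000670000

    # set a new per-device value, e.g. 64
    esxcli storage core device set -d naa.600508b4000f0aab0000800000670000 --sched-num-req-outstanding 64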
In the Virtual SAN (VSAN) beta refresh, we released a number of new Ruby vSphere Console (RVC) commands to examine the Storage Policy Based Management (SPBM) settings. For those of you who have been participating in the beta, you will know that to deploy a virtual machine on VSAN, you create a storage policy for the virtual machine, which may stipulate the number of mirror copies of the virtual machine disk (FailuresToTolerate) or indeed a stripe width for the VMDK. SPBM is the underlying technology which controls this aspect of VSAN. In this post, we can look at some of…
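To give you a flavour before diving in, here is the sort of thing I mean. The command names and arguments below are from my memory of the beta refresh and the paths are placeholders from my lab, so treat it as a rough sketch rather than a reference:

    # from an RVC session logged in to vCenter (paths below are placeholders)
    > spbm.profiles <vcenter-path>          # list the storage policies defined in SPBM
    > vsan.vm_object_info vms/my-test-vm    # show the policy settings applied to a VM's objects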
All Flash Arrays continue to make the news. Whether it is EMC’s XtremIO launch or Violin Memory’s current market woes, there is no doubt that AFAs continue to generate a lot of interest. Those of you interested in flash storage will not need an introduction to SolidFire. The company was founded by Dave Wright (ex-Rackspace) and has been around since 2009. I have been trying to catch up with SolidFire for some time, as I’d heard their pitch around Quality of Service on a per-volume basis and wanted to learn more, especially how it integrates with vSphere features. Recently I…
About a year ago, I wrote an article stating that Raw Device Mappings (RDMs) continued to rely on LUN IDs, and that if you wished to successfully vMotion a virtual machine with an RDM from one host to another, you had to ensure that the LUN was presented in a consistent manner (including identical LUN IDs) to every host that you wished to vMotion to. I recently learnt that this restriction has been lifted in vSphere 5.5. To verify, I did a quick test, presenting the same LUN with a different LUN ID to two different hosts, using that…
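For anyone who wants to repeat the check, the simplest way I know of to confirm how a LUN is presented is to look it up by its NAA identifier on each host and compare the LUN number reported on the paths (the identifier below is just a placeholder from my lab):

    # confirm the device is visible on this host
    esxcli storage core device list -d naa.600508b4000f0aab0000800000680000

    # see which LUN number each path to the device reports
    esxcli storage core path list -d naa.600508b4000f0aab0000800000680000 | grep "LUN:"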
In a post on the vSphere blog, I spoke about how to use maintenance mode. Following on from that, a number of people asked me how they should safely shut down a VSAN cluster. In this post, I will address that question and share my observations. On my three-node VSAN cluster, I had a number of virtual machines running, as well as a vApp containing the vCenter Operations Manager VMs. My first step was to shut down all virtual machines in the cluster.
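If you prefer to do this from the host command line rather than the web client, something along these lines works on each host (the VM ID below is a placeholder; a graceful shutdown requires VMware Tools in the guest):

    # list the VMs registered on this host and note their IDs
    vim-cmd vmsvc/getallvms

    # gracefully shut down each VM by ID (use power.off as a fallback if Tools is not running)
    vim-cmd vmsvc/power.shutdown 12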