vSphere 5.5 Storage Enhancements Part 10 – 16Gb E2E FC Support

A short and sweet post today. In vSphere 5.0, VMware introduced support for 16Gb FC HBAs. However, these HBAs had to be throttled down to work at 8Gb. In vSphere 5.1, VMware supported these 16Gb HBAs running at 16Gb. However, an important point to note is that there was no support for full end-to-end 16Gb connectivity from host to array in vSphere 5.1. To get full bandwidth, you possibly had to configure a number of 8Gb connections from the switch to the storage array. With the release of vSphere 5.5, VMware now supports 16Gb E2E (end-to-end) Fibre Channel. Get notification of…

VSAN Part 14 – Host Memory Requirements

For those of you participating in the VMware Virtual SAN (VSAN) beta, this is a reminder that there is a VSAN Design & Sizing Guide available on the community forum. It is part of the Virtual SAN (VSAN) Proof of Concept (POC) Kit, and can be found by clicking this link here. The guide has recently been updated to include some Host Memory Requirements, as we got this query from a number of customers participating in the beta. The actual host memory requirement is directly related to the number of physical disks in the host and the number of disk groups…
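Since the excerpt above is cut short, here is a minimal sketch in Python of what that kind of sizing calculation could look like. The overhead constants are hypothetical placeholders rather than the figures from the Design & Sizing Guide; the point is simply to illustrate how the requirement scales with the number of disk groups and physical disks per host.

```python
# Illustrative VSAN host memory sizing sketch.
# The constants below are placeholder assumptions, NOT the official figures --
# refer to the VSAN Design & Sizing Guide for the real numbers.

BASE_OVERHEAD_GB = 3.0                # assumed fixed per-host overhead
PER_DISK_GROUP_OVERHEAD_GB = 0.5      # assumed overhead per disk group
PER_PHYSICAL_DISK_OVERHEAD_GB = 0.07  # assumed overhead per physical disk

def vsan_memory_overhead_gb(num_disk_groups: int, disks_per_group: int) -> float:
    """Estimate the VSAN memory overhead for one host (illustration only)."""
    total_disks = num_disk_groups * disks_per_group
    return (BASE_OVERHEAD_GB
            + num_disk_groups * PER_DISK_GROUP_OVERHEAD_GB
            + total_disks * PER_PHYSICAL_DISK_OVERHEAD_GB)

if __name__ == "__main__":
    # Example: 2 disk groups, each containing 7 physical disks
    print(f"Estimated overhead: {vsan_memory_overhead_gb(2, 7):.2f} GB")
```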

vSphere 5.5 Storage Enhancements Part 9 – PDL AutoRemove

We at VMware have been making considerable changes to the way that All Paths Down (APD) and Permanent Device Loss (PDL) conditions are handled. In vSphere 5.1, we introduced a number of enhancements around APD, including timeouts for devices that entered the APD state. I wrote about the vSphere 5.1 APD improvements here. In vSphere 5.5 we introduced yet another improvement to this mechanism, namely the automatic removal from the ESXi host of devices which have entered the PDL state.
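This behavior is governed by the host advanced setting Disk.AutoremoveOnPDL. Below is a minimal pyVmomi sketch of how you might check it on a host; the hostname, credentials and the choice of the first host in the inventory are placeholder assumptions.

```python
# Minimal pyVmomi sketch: query the Disk.AutoremoveOnPDL advanced setting.
# Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="esxi01.example.com", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Grab the first ESXi host found in the inventory (assumption for this sketch)
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    opt_mgr = host.configManager.advancedOption
    for opt in opt_mgr.QueryOptions("Disk.AutoremoveOnPDL"):
        # A value of 1 means devices that enter PDL are automatically removed
        print(opt.key, "=", opt.value)
finally:
    Disconnect(si)
```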

QLogic – Execution Throttle Feature Concerns

I had a customer reach out to me recently to discuss VMware’s Storage I/O Control and Adaptive Queuing behaviors, and how they work with QLogic’s Execution Throttle feature. To be honest, I didn’t have a good understanding of the Execution Throttle mechanism from QLogic, so I did a little research to see if this feature interoperates with VMware’s own I/O congestion management features.

Storage DRS Default VM Affinity Setting

[Updated] This is a very short post as I only learnt about this recently myself. I thought it was only available in vSphere 5.5, but it appears to be in vSphere 5.1 too. Anyhow, Storage DRS now has a new option that allows you to configure the default VM affinity setting. Historically, VMDKs from the same virtual machine were always kept together on the same datastore by default; you had to set a VMDK anti-affinity rule to keep them apart. Now you can set a default for this option, which can either be to keep VMDKs together on the same…
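For anyone who prefers to drive this programmatically, here is a minimal pyVmomi sketch of how the default intra-VM affinity could be changed on a datastore cluster. The vCenter address, credentials and the pod name "DatastoreCluster01" are placeholder assumptions.

```python
# Minimal pyVmomi sketch: set the Storage DRS default VM (intra-VM) affinity
# for a datastore cluster so VMDKs are no longer kept together by default.
# Connection details and pod name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    pods = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.StoragePod], True).view
    pod = next(p for p in pods if p.name == "DatastoreCluster01")

    # defaultIntraVmAffinity=False means VMDKs of a VM may be placed on
    # different datastores unless an affinity rule says otherwise
    spec = vim.storageDrs.ConfigSpec(
        podConfigSpec=vim.storageDrs.PodConfigSpec(defaultIntraVmAffinity=False))
    # modify=True merges this change into the existing SDRS configuration
    content.storageResourceManager.ConfigureStorageDrsForPod_Task(
        pod=pod, spec=spec, modify=True)
finally:
    Disconnect(si)
```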

VSAN Part 13 – Examining the .vswp object

I’ve seen a few questions recently around the .vswp file on virtual machines. The .vswp or VM swap is one of the objects that make up the set of virtual machine objects on the VSAN datastore, along with the VM Home namespace, VMDKs and snapshot deltas. The reason for the question is that people do not see the .vswp file represented in the list of virtual machine objects in the UI. The inevitable follow-on question is then how you can see the policy and resource consumption of a virtual machine’s .vswp object.

YANRBP – Yet Another New Role Blog Post

First off, I’d like to wish everyone a very happy 2014. I’m starting off 2014 with a new role within VMware. After almost 3 years with the VMware Technical Marketing team, I’ve decided to take up a new challenge. As of January 1st, 2014, I am now a Senior Storage Architect in the Integration Engineering team, which is part of VMware R&D. This team is also known as Customer[0]. The new Integration Engineering Storage Architect role allows me to work directly with customer/partner organizations, our field staff and R&D to incubate and field-enable the next generation of VMware storage…