Supported network topologies for VSAN stretched cluster

As part of the Virtual SAN 6.1 announcements at VMworld 2015, possibly the most eagerly anticipated was the support for a VSAN stretched cluster configuration. VSAN can now protect your virtual machines across data centers, not just across racks (which was achievable with the fault domains introduced in VSAN 6.0). I’ve been hearing requests from customers to support this since the initial VSAN beta, so it is definitely a welcome addition to the supported configurations. The obvious next question is: how do I set it up? Well, first of all, you will need to make sure that you have a…
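For context on the fault domain piece, here is a rough pyVmomi sketch (hypothetical vCenter address and credentials, and it assumes the per-host faultDomainInfo property introduced with the 6.0 API) that lists which fault domain each VSAN-enabled host currently belongs to:

```
# A minimal sketch, not taken from the post: hypothetical vCenter and credentials,
# and it assumes the faultDomainInfo property added to vim.vsan.host.ConfigInfo
# in the 6.0 API. It prints the fault domain name (if any) for each VSAN host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    cfg = host.config.vsanHostConfig
    if cfg and cfg.enabled:
        fd = getattr(cfg, "faultDomainInfo", None)
        print(host.name, "fault domain:", fd.name if fd and fd.name else "(none)")

Disconnect(si)
```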

Adventures in VIO, VMware Integrated Openstack

I’ll start this post by stating straight up that I am no OpenStack expert. Far from it. In fact, the only reason I started to play with VMware Integrated OpenStack (VIO) was to get up to speed for a forthcoming OpenStack class that I am taking next week. What I’ve documented here is a bunch of issues I ran into during the VIO deployment process. Hopefully these will prove useful to some folks who are also new to OpenStack and plan on going through the same exercise. I’m not going to describe VIO in any detail, nor any of…

2014 VMware Fling Contest – Call For Entries

It’s that time of year once again. The 2014 VMware Fling Contest is now open. Do you have an idea about how certain features or functionality could be improved? Can you think of an app that would make the life of a system administrator so much easier? Do you have a repetitive task in your vSphere environment that you wish you could automate? Or a decision-making tool for certain tasks? We are looking for you, our customers & users, to propose ideas for new VMware Flings. Our panel of judges will pick the winner. Previous winners include the…

vSphere 5.5 Storage Enhancements Part 10 – 16Gb E2E FC Support

A short and sweet post today. In vSphere 5.0, VMware introduced support for 16Gb FC HBAs. However, these HBAs had to be throttled down to run at 8Gb. In 5.1, VMware supported these 16Gb HBAs running at 16Gb. However, an important point to note is that there was no support for full end-to-end 16Gb connectivity from host to array in vSphere 5.1. To get full bandwidth, you possibly had to configure a number of 8Gb connections from the switch to the storage array. With the release of vSphere 5.5, VMware now supports 16Gb E2E (end-to-end) Fibre Channel. Get notification of…
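If you want to double-check what your hosts have negotiated, something along these lines (a pyVmomi sketch with a hypothetical vCenter and credentials) will list the Fibre Channel HBAs on each host and the operating speed the vSphere API reports for each:

```
# A rough sketch, not from the post: hypothetical vCenter address and credentials.
# It walks every host and prints its Fibre Channel HBAs with the speed value
# the vSphere API reports for the adapter.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.FibreChannelHba):
            # 'speed' is the adapter's current operating speed as reported by the API
            print(host.name, hba.device, hba.model, hba.speed)

Disconnect(si)
```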

vSphere 5.5 Storage Enhancements Part 9 – PDL AutoRemove

We at VMware have been making considerable changes to the way that All Paths Down (APD) and Permanent Device Loss (PDL) conditions are handled. In vSphere 5.1, we introduced a number of enhancements around APD, including timeouts for devices that entered into the APD state. I wrote about the vSphere 5.1 APD improvements here. In vSphere 5.5, we introduced yet another improvement to this mechanism: the automatic removal from the ESXi host of devices that have entered the PDL state.
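The behaviour is governed by the Disk.AutoremoveOnPDL advanced host setting. As a rough illustration, here is a pyVmomi sketch (hypothetical host and credentials) that reads the current value; setting it to 0 would disable the auto-remove behaviour.

```
# A rough sketch (hypothetical ESXi host and credentials): read the
# Disk.AutoremoveOnPDL advanced setting that controls PDL AutoRemove in 5.5.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password", sslContext=ctx)

host = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True).view[0]

opt_mgr = host.configManager.advancedOption
for opt in opt_mgr.QueryOptions("Disk.AutoremoveOnPDL"):
    print(opt.key, "=", opt.value)   # 1 = auto-remove PDL devices, 0 = disabled

# To change the value, pass a vim.option.OptionValue for this key
# to opt_mgr.UpdateOptions().

Disconnect(si)
```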

Tweet sized vSphere Design Considerations – Call for Entries

I’m delighted to be involved in an upcoming project on vSphere Design Considerations. This is the brain-child of my ex-colleague, Frank Denneman, who recently fu^H^H decided to broaden his horizons and join an interesting new start-up called PernixData. 🙂 Also on the project are Duncan Epping (still a colleague), Jason Nash and the inimitable Vaughn Stewart of NetApp. To cut to the chase, the objective here is to create a pocket-sized book containing the best vSphere design considerations. However, the cool part of this book is that the community are the authors. And the book will be free, courtesy of…

Microsoft Clustering on vSphere – Incompatible Device Errors

When setting up a Microsoft Cluster with nodes running in vSphere Virtual Machines across ESXi hosts, I have come across folks who have experienced "Incompatible device backing specified for device ‘0’" errors. These are typically the result of the RDM (Raw Device Mapping) setup not being quite right. There can be a couple of reasons for this, as highlighted here. Different SCSI Controller: On one occasion, the RDM was mapped to the same SCSI controller as the Guest OS boot disk. Once the RDM was moved to its own unique SCSI controller, the issue was resolved. Basically, if the OS disk…
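As a quick way to sanity-check the layout, a pyVmomi sketch along these lines (hypothetical vCenter, credentials and VM name) reports which SCSI controller each disk sits on, the controller's bus-sharing mode, and whether the disk is an RDM:

```
# A minimal sketch, not from the post: hypothetical vCenter, credentials and
# VM name. For each virtual disk it prints the owning SCSI controller's bus
# number, its bus-sharing mode, and whether the backing is an RDM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

vms = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True).view
vm = next(v for v in vms if v.name == "mscs-node1")   # hypothetical VM name

controllers = {d.key: d for d in vm.config.hardware.device
               if isinstance(d, vim.vm.device.VirtualSCSIController)}

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        ctrl = controllers.get(dev.controllerKey)
        if ctrl is None:
            continue  # e.g. a disk attached to an IDE/SATA controller
        is_rdm = isinstance(dev.backing,
                            vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo)
        print("%s -> SCSI(%d) sharing=%s rdm=%s" % (
            dev.deviceInfo.label, ctrl.busNumber, ctrl.sharedBus, is_rdm))

Disconnect(si)
```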