Yesterday was my first day at VMworld 2014. As usual with this event, there are simply so many interesting announcements that it is hard to keep track. However, for me, there were a few things which stood out in the storage space worth calling out. These are specifically VMware focused products and features. I know that many of our partners have also made announcements in the storage space, but for today I concentrated solely on VMware. Here are the two that really caught my attention.
Last year I published a list of storage vendors and partners that I was planning to check out at VMworld 2013. This year is no different, with a number of new arrivals on the storage scene, as well as some super cool new products from many of VMware’s partners. Whilst this is by no means a definitive list of what’s on show, these are the ones that I am particularly interested in checking out this year.
On a recent trip to VMware in Palo Alto, I found some time to visit with a good pal of mine, Vinay Gaonkar, who is now the Product Manager for XtremIO over at EMC. Vinay used to be a storage PM at VMware (he worked on the initial phases of VVols), and we worked together on a number of storage items in various vSphere releases. It’s been almost 2 years since I last spoke to the XtremIO folks (VMworld 2012 in fact, when the product had not yet become generally available), so I thought that this would be a good time to catch up with them, as we are in the run up to VMworld 2014.
I was in a conversation with one of my pals over at Tintri last week (Fintan), and he observed some strange behaviour when provisioning VMs from a catalog in vCloud Director (vCD). When he disabled Fast Provisioning, he expected that provisioning further VMs from the catalog would still be offloaded via the VAAI-NAS plugin. All the ESXi hosts have the VAAI-NAS plugin from Tintri installed. However, it seems that the provisioning/cloning operation was not being offloaded to the array, and the ESXi hosts’ resources were being used for the operation instead. Deployments of VMs from the catalogs were taking minutes rather than seconds. What was going on?
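As an aside, a quick way to sanity-check whether VAAI-NAS offload is even available for a given NFS datastore is from the ESXi shell. A minimal sketch (nothing Tintri-specific here beyond the grep pattern):

```shell
# List NFS datastores along with their Hardware Acceleration status.
# "Supported" indicates a VAAI-NAS vendor plugin is active for that mount.
esxcli storage nfs list

# Confirm the vendor's VAAI-NAS plugin VIB is actually installed on the host.
esxcli software vib list | grep -i tintri
```

If Hardware Acceleration shows as “Not Supported”, clones will fall back to host-based copies regardless of what vCD is doing.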
I was involved in some conversations recently on how the VAAI UNMAP command behaved, and which characteristics affected its performance. For those of you who do not know, UNMAP is our mechanism for reclaiming dead or stranded space from thinly provisioned VMFS volumes. Prior to this capability, the ESXi host had no way of informing the storage array that the space previously consumed by a particular VM or file was no longer in use. This meant that the array thought that more space was being consumed than was actually the case. UNMAP, part of the vSphere APIs for Array Integration, enables administrators to overcome this challenge by telling the array that these blocks on a thin provisioned volume are no longer in use and that they can be reclaimed.
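Since vSphere 5.5, the reclaim is driven from the ESXi shell via esxcli. A minimal sketch, where the datastore label and block count are placeholders for your own environment:

```shell
# Reclaim dead space from a thin-provisioned volume backing a VMFS datastore.
# -l : the VMFS volume label
# -n : number of VMFS blocks to unmap per iteration (defaults to 200 if omitted)
esxcli storage vmfs unmap -l MyDatastore -n 200
```

The per-iteration block count is one of the characteristics that influences how long the overall reclaim takes.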
I’m sure Frank Denneman will need no introduction to many of you reading this article. Frank &amp; I both worked in the technical marketing organization at VMware, before Frank moved on to PernixData last year and I moved to Integration Engineering here at VMware. PernixData FVP 1.0 was released last year, and I did a short post on them here. I’d seen a number of people discussing new FVP features in the community, especially after PernixData co-founder Satyam’s presentation at Tech Field Day 5 (#TFD5). I decided to reach out to Frank, and see if he could spare some time to revisit some of the new features that PernixData is planning to introduce. Fortunately, he did. I started by asking Frank about how PernixData is doing in general, before moving on to the new bits.
I’ve been doing a bit of work over the past number of weeks on the adapters for vCenter Operations (vC Ops) with my old pal Paudie. We are working on vCenter Operations 5.8 and using a vSphere 5.5U1 environment. Since we have a Brocade Fibre Channel switch and an EMC VNX array in our lab, I wanted to get the Management Pack for Storage Devices (MPSD) and the Brocade SAN Analytics Management Pack deployed, and see what information we could glean from those extension packs. When we completed the configuration, we were able to go into the vC Ops custom view and see details like the following Brocade – Health Overview and Storage Components Heatmap:
Caution: We spent a lot of time trying to figure out why the MPSD adapter would not connect to the CIMOM service on Brocade’s Network Advisor. This boiled down to networking/DNS configuration issues. The MPSD release notes for vC Ops describe the issue. As they say, I should have RTFM. Anyhow, here are the steps we went through to get this setup going. I’m afraid it is rather long, but hopefully you will find the information in here useful.
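With hindsight, a couple of basic checks from the vC Ops vApp would have saved us the head-scratching. A hedged sketch, assuming the standard CIM-XML ports on the Network Advisor server (the hostname below is a placeholder for your own BNA server):

```shell
# Verify the Network Advisor hostname resolves correctly from the vC Ops vApp;
# the MPSD adapter is sensitive to DNS misconfiguration.
nslookup bna.lab.local

# Check that the CIMOM service ports are reachable
# (5988 = CIM-XML over HTTP, 5989 = CIM-XML over HTTPS).
nc -zv bna.lab.local 5988
nc -zv bna.lab.local 5989
```

If name resolution or either port check fails, fix that before spending any time on the adapter configuration itself.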