I was in a conversation with one of my pals over at Tintri last week (Fintan), and he observed some strange behaviour when provisioning VMs from a catalog in vCloud Director (vCD). When he disabled Fast Provisioning, he expected that provisioning further VMs from the catalog would still be offloaded via the VAAI-NAS plugin; all of the ESXi hosts have the VAAI-NAS plugin from Tintri installed. However, it seems that the provisioning/cloning operation was not being offloaded to the array, and the ESXi hosts' own resources were being used for the operation instead. Deployments of VMs from the catalog were taking minutes rather than seconds. What was going on?
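As a quick aside: before digging into behaviour like this, it is worth confirming that the NAS plugin is actually installed and active on the hosts. The sketch below is my own illustration (not part of Fintan's investigation), assuming Python 3.7+ is available alongside the standard esxcli namespaces on an ESXi host:

```python
# Diagnostic sketch: confirm a VAAI-NAS plugin is installed and that NFS
# datastores report hardware acceleration. Run on the ESXi host itself.
import subprocess

def esxcli(*args: str) -> str:
    """Run an esxcli command and return its stdout."""
    return subprocess.run(
        ["esxcli", *args], capture_output=True, text=True, check=True
    ).stdout

# Vendor VAAI-NAS plugins ship as VIBs; filtering on "nas" is a heuristic.
vibs = esxcli("software", "vib", "list")
print([line for line in vibs.splitlines() if "nas" in line.lower()])

# The "Hardware Acceleration" column reads "Supported" for NFS datastores
# where a VAAI-NAS plugin is actually in play.
print(esxcli("storage", "nfs", "list"))
```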
I was involved in some conversations recently about how the VAAI UNMAP command behaves, and which characteristics affect its performance. For those of you who do not know, UNMAP is our mechanism for reclaiming dead or stranded space from thinly provisioned VMFS volumes. Prior to this capability, the ESXi host had no way of informing the storage array that space previously consumed by a particular VM or file was no longer in use. This meant that the array thought more space was being consumed than was actually the case. UNMAP, part of the vSphere APIs for Array Integration (VAAI), enables administrators to overcome this challenge by telling the array that these blocks on a thin provisioned volume are no longer in use and can be reclaimed.
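To make that concrete, here is roughly what a manual reclaim looks like from the host side. This is a minimal sketch of my own, assuming ESXi 5.5 or later (where the esxcli UNMAP namespace exists), Python 3.7+, and a hypothetical datastore called datastore1:

```python
# Sketch: issue VAAI UNMAPs against a thin-provisioned VMFS datastore
# by driving "esxcli storage vmfs unmap" from the ESXi host.
import subprocess

def reclaim_dead_space(volume_label: str, blocks_per_pass: int = 200) -> None:
    """Reclaim dead space from a VMFS volume via esxcli.

    The reclaim unit (-n) is the number of VMFS blocks unmapped per
    iteration; larger values finish sooner but send bigger bursts of
    UNMAP commands for the array to absorb.
    """
    subprocess.run(
        ["esxcli", "storage", "vmfs", "unmap",
         "-l", volume_label, "-n", str(blocks_per_pass)],
        check=True,
    )

# Hypothetical datastore name, purely for illustration.
reclaim_dead_space("datastore1")
```

On older releases (5.0/5.1), the equivalent reclaim was done with vmkfstools -y instead.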
Nimble Storage are another company who have been making a lot of waves in the world of storage in recent years. Based in San Jose, CA, they IPO'ed earlier this year and currently have something in the region of 600 employees worldwide. I caught up with Wen Yu, whom I have known since my early days at VMware, where we worked together in the support organization. Wen moved over to Nimble a couple of years back and is now a technical evangelist there. In fact, Nimble were the subject of the very first post on this blog when I launched it almost 2 years ago; at the time I wrote about some significant architectural updates in their 2.0 release. My understanding is that their next major release (2.1) is just around the corner, so this was a good time to chat with Wen about some new features and other things happening in the Nimble world.
I'm sure Frank Denneman needs no introduction to many of you reading this article. Frank and I both worked in the technical marketing organization at VMware before Frank moved on to PernixData last year and I moved to Integration Engineering here at VMware. PernixData released FVP 1.0 last year, and I did a short post on it here. I'd seen a number of people discussing new FVP features in the community, especially after the presentation by PernixData co-founder Satyam at Tech Field Day 5 (#TFD5). I decided to reach out to Frank to see if he could spare some time to revisit some of the new features that PernixData is planning to introduce. Fortunately, he could. I started by asking Frank how PernixData is doing in general, before moving on to the new bits.
Pure Storage are all over the news at the moment. They just secured another round of funding ($225 million, to be precise) and are now valued at over $3 billion. You can read more about that here. However, even before this announcement, I had already arranged a catch-up chat with Pure's primary evangelist (and a good pal of mine), Vaughn Stewart. I was surprised to see that it had been 18 months since I last did a piece on Pure, so I really did want to see what changes they had made in the meantime; there were a few vSphere interoperability pieces still to be completed when we last spoke.
We just got notification about a potential issue with the VAAI UNMAP primitive when used on EMC VMAX storage systems with Enginuity version 5876.159.102 or later. It seems that during an ESXi reboot, or during a device ATTACH operation, the ESXi host may report corruption. The following is an overview of the details found in EMC KB 184320. Other symptoms include vCenter operations on virtual machines failing to complete, and the following errors might be found in the VMkernel logs (a quick way to scan for these is sketched after the excerpts):
WARNING: Res3: 6131: Invalid clusterNum: expected 2624, read 0
[type 1] Invalid totalResources 0 (cluster 0).
[type 1] Invalid nextFreeIdx 0 (cluster 0).
WARNING: Res3: 3155: Volume aaaaaaaa-bbbbbbbb-cccc-dddddddddddd ("datastore1") might be damaged on the disk.
Resource cluster metadata corruption has been detected
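If you want to check whether a host has already logged these symptoms, something along the lines of the sketch below will do it. The log path and patterns here are my own assumptions derived from the messages quoted above, not taken from the EMC KB itself:

```python
# Sketch: scan a copy of the VMkernel log for the corruption warnings above.
import re
import sys

# Patterns lifted from the log excerpts quoted in the text.
PATTERNS = [
    re.compile(r"Res3: \d+: Invalid clusterNum"),
    re.compile(r"might be damaged on the disk"),
]

def scan(path: str) -> None:
    """Print any line in the given log file that matches a known symptom."""
    with open(path, errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            if any(p.search(line) for p in PATTERNS):
                print("%s:%d: %s" % (path, lineno, line.rstrip()))

scan(sys.argv[1] if len(sys.argv) > 1 else "/var/log/vmkernel.log")
```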
I thought it was about time that I looked at some of the larger storage vendors closer to home. One of these is, of course, Bull. This company is probably more familiar to those of us based in Europe than to those of you based in the Americas or Asia Pacific. However, VMware customers in EMEA will have seen them in the Solutions Exchange at VMworld Europe, where they have a reasonably large presence. After some conversation with my good pal Didier Pironet, whom I've met at a couple of recent VMUGs, I was introduced to Philippe Reynier, who is a manager in the Bull StorWay Competence Center and Solution Center. Philippe provided me with a lot of good detail on Bull's storage solutions, which I will share with you here.