Although I didn’t attend EMC World this year, there were a lot of interesting announcements. I managed to catch up with Matt Cowger (who sort of sits between the EMC and VMware camps) and ran through some of the main highlights from this year’s conference. A lot has been written about EMC World already (and I mean a lot), so I’m going to try to keep the highlights to a minimum and provide links to where you can read more.
In this post, we look at a particular behaviour when using the default (or None) policy with VSAN. I have stated many times in the past that when a VM is deployed on the VSAN datastore, it behaves as if it is thinly provisioned unless the capability ‘Object Space Reservation’ (OSR) is specified in the VM Storage Policy. The OSR pre-allocates space on the VSAN datastore for the virtual machine’s storage objects, and is specified as a percentage of the actual VMDK size. However, the behaviour is slightly different when the default policy is used. Once again, I was in a conversation with a customer who stated that when he used the default policy of “None”, he could see that the space consumed on the VSAN datastore was equal to the size of the VMDK multiplied by (FTT + 1), where FTT is the Number of Failures To Tolerate and FTT + 1 is the number of replicas VSAN keeps. He wondered why this was the case when the default policy clearly did not contain an Object Space Reservation capability.
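To make the arithmetic concrete, here is a minimal Python sketch of the space-reservation sums described above. This is purely illustrative (the sizes and policy values are made up, and this is not a VSAN API call): VSAN keeps FTT + 1 replicas of a storage object, and OSR determines how much of each replica is reserved up front.

```python
# Illustrative sketch of the VSAN space-reservation arithmetic.
# Values are hypothetical; this is not output from any VSAN API.

def reserved_capacity_gb(vmdk_size_gb, ftt=1, osr_percent=0):
    """Space reserved on the VSAN datastore for a VMDK's replicas.

    VSAN keeps (ftt + 1) replicas of a storage object. With no Object
    Space Reservation (osr_percent=0) the object is effectively thin,
    so nothing is reserved up front; with OSR set, each replica
    pre-allocates that percentage of the VMDK size.
    """
    replicas = ftt + 1
    return vmdk_size_gb * replicas * (osr_percent / 100.0)

# A 40 GB VMDK with FTT=1 (two replicas):
print(reserved_capacity_gb(40, ftt=1, osr_percent=0))    # 0.0  -> thin behaviour
print(reserved_capacity_gb(40, ftt=1, osr_percent=100))  # 80.0 -> 40 GB x 2 replicas
```

The customer’s observation, space consumed equal to VMDK size times (FTT + 1), is exactly what the second case produces, i.e. the objects were behaving as if OSR were set to 100%.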
Nimble Storage are another company who have been making a lot of waves in the world of storage in recent years. Based in San Jose, CA, they IPO’ed earlier this year and currently have somewhere in the region of 600 employees worldwide. I caught up with Wen Yu, whom I have known since my early days at VMware, where we worked together in the support organization. Wen moved over to Nimble a couple of years back and is now a technical evangelist there. In fact, Nimble were the subject of the very first post on this blog when I launched it almost two years ago. At the time I wrote about some significant architectural updates in their 2.0 release. My understanding is that their next major release (2.1) is just around the corner, so this was a good time to chat with Wen about some new features and other things happening in the Nimble world.
I’m sure Frank Denneman needs no introduction to many of you reading this article. Frank and I both worked in the technical marketing organization at VMware before Frank moved on to PernixData last year and I moved to Integration Engineering here at VMware. PernixData released FVP 1.0 last year, and I did a short post on it here. I’d seen a number of people discussing new FVP features in the community, especially after PernixData co-founder Satyam’s presentation at Tech Field Day 5 (#TFD5). I decided to reach out to Frank to see if he could spare some time to revisit some of the new features that PernixData is planning to introduce. Fortunately, he could. I started by asking Frank how PernixData is doing in general, before moving on to the new bits.
Pure Storage are all over the news at the moment. They just secured another round of funding ($225 million, to be precise) and are now valued at over $3 billion. You can read more about that here. However, even before this announcement, I had already arranged a catch-up chat with Pure’s primary evangelist (and a good pal of mine), Vaughn Stewart. I was surprised to see that it had been 18 months since I last did a piece on Pure, so I was keen to see what changes they had made in the meantime, as there were a few vSphere interoperability pieces still to be completed when we last spoke.
Those of you familiar with VSAN will know that one of the capabilities which can be placed in a VM Storage Policy is Number of Disk Stripes Per Object (stripe width for short). I covered this in an earlier post which looked at the various VSAN capabilities. Recently, a customer who had not specified a stripe width in the VM Storage Policy was perplexed to find that his storage objects had indeed been striped across a number of disks. He reached out to ask if I could provide an explanation.
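One reason this can happen, and the only assumption in the sketch below, is VSAN’s maximum component size: objects larger than 255 GB (the default) are split into multiple components regardless of the requested stripe width, and those components can be placed on different disks, which looks exactly like striping. A minimal Python illustration:

```python
import math

# Illustrative sketch: VSAN splits large objects into multiple components
# even when Number of Disk Stripes Per Object is left at the default of 1.
MAX_COMPONENT_GB = 255  # default VSAN maximum component size (assumption noted above)

def component_count(object_size_gb, stripe_width=1):
    """Number of components created for a storage object.

    Even with stripe_width=1, an object bigger than MAX_COMPONENT_GB is
    broken into multiple components, which may land on different disks
    and appear striped.
    """
    per_stripe = math.ceil(object_size_gb / MAX_COMPONENT_GB)
    return stripe_width * per_stripe

print(component_count(200))  # 1 -> no visible striping
print(component_count(600))  # 3 -> split across (up to) 3 disks
```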
I watched a very cool demonstration this morning from the All Flash Array vendor, SolidFire. I spoke with SolidFire at the end of last year and did a blog post about them here. One of the most interesting parts of our conversation last year was how SolidFire’s QoS feature and VMware’s Storage I/O Control (SIOC) feature could interoperate. In a nutshell, QoS works at the datastore/volume layer, whereas SIOC deals with the VM/VMDK layer. Last week, Aaron Delp and Adam Carter of SolidFire did an introduction to QoS, both on vSphere and on the SolidFire system. They also gave one of the coolest demos I’d seen in some time, showing how they have managed to get SIOC and QoS to work in tandem.
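To sketch the layering: SolidFire QoS caps the total IOPS a volume delivers, while SIOC (under contention) divides the datastore’s throughput among VMs in proportion to their shares. The Python below is a simplified model of that interplay, not code from either product; the VM names, share values and IOPS cap are all invented for the example.

```python
# Purely illustrative model of volume-level QoS (SolidFire) combined with
# per-VM shares (SIOC). All values are hypothetical.

volume_max_iops = 10000  # QoS maximum applied to the datastore's volume

# SIOC shares for the VMs on that datastore (hypothetical VMs).
vm_shares = {"vm-critical": 2000, "vm-normal": 1000, "vm-lowprio": 500}

def sioc_allocation(max_iops, shares):
    """Split the volume's IOPS budget proportionally to SIOC shares.

    SIOC only throttles when the datastore is congested; under
    contention each VM's slice is proportional to its share value.
    """
    total = sum(shares.values())
    return {vm: max_iops * s / total for vm, s in shares.items()}

for vm, iops in sioc_allocation(volume_max_iops, vm_shares).items():
    print(f"{vm}: ~{iops:.0f} IOPS under contention")
```

The appeal of the combination demoed by SolidFire is that the array enforces a hard, predictable ceiling per volume while SIOC arbitrates fairly among the VMs sharing it.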