A few weeks ago, my good pal Cody Hosterman over at Pure Storage was experimenting with VAAI and discovered that he could successfully UNMAP (reclaim) blocks directly from a Guest OS in vSphere 6.0. VAAI is the set of vSphere APIs for Array Integration. Cody wrote about his findings here. Effectively, if you have deleted files within a Guest OS, and your VM is thinly provisioned, you can tell the array through this VAAI primitive that you are no longer using these blocks, which allows the array to reclaim them for other uses. I know a lot of you have been waiting for this functionality for some time. However, Cody had a bunch of questions and reached out to me to see if I could provide some answers. After conversing with a number of engineers and product managers here at VMware, here are some of the answers to the questions that Cody asked.
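By way of illustration, here is a minimal sketch of triggering reclaim from inside a Linux guest. The mount point is a placeholder, and in-guest UNMAP also depends on host-side configuration such as the VMFS EnableBlockDelete advanced option; treat this as an outline of the moving parts rather than a complete recipe.

```shell
# On the ESXi host: allow guest-initiated UNMAPs to be passed down to VMFS
# (assumption: this advanced option applies to your particular build).
esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1

# Inside the Linux guest: after deleting files, tell the thin virtual disk
# which blocks are now free. /data is a hypothetical mount point.
fstrim -v /data

# Windows guests can issue a manual re-trim instead, e.g.:
#   Optimize-Volume -DriveLetter D -ReTrim -Verbose
```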
The more astute of you who have already moved to vSphere 6.0, and like looking at CLI outputs, may have observed some new columns/fields in the PSA claimrules when you run the following command:
# esxcli storage core claimrule list --claimrule-class=VAAI
The new fields are as follows:

XCOPY Use Array Reported Values  XCOPY Use Multiple Segments  XCOPY Max Transfer Size
-------------------------------  ---------------------------  -----------------------
false                            false                        0
false                            false                        0
false                            false                        0
false                            false                        0
false                            false                        0
false                            false                        0
false                            false                        0
false                            false                        0
false                            false                        0
false                            false                        0
false                            false                        0
false                            false                        0
false                            false                        0
false                            false                        0
In Virtual SAN version 6.0, VMware introduced support for an all-flash VSAN. In other words, both the caching layer and the capacity layer can be made up of flash-based devices such as SSDs. However, the mechanism for designating some flash devices for the capacity layer, while leaving other flash devices for the caching layer, is not at all intuitive at first glance. For that reason, I’ve included some steps here on how to do it.
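As a sketch of the workflow, the designation is done by tagging devices with esxcli. The device identifier below is a placeholder for one of your own flash devices, and this assumes the 6.0 esxcli vsan namespace:

```shell
# List the disks and flash devices that VSAN can see on this host.
vdq -q

# Tag a flash device for the capacity tier; untagged flash devices
# remain eligible for the caching tier.
esxcli vsan storage tag add -d naa.xxxxxxxxxxxxxxxx -t capacityFlash

# If you change your mind, remove the tag again.
esxcli vsan storage tag remove -d naa.xxxxxxxxxxxxxxxx -t capacityFlash
```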
In vSphere 6.0, an improvement has been made to how we handle I/O issues such as flaky drivers, misbehaving HBA or array firmware, dropped frames, fabric disruption, and so on, all of which can cause I/O failures. The issue is that, previously, we would continually retry these sorts of failed I/Os, which could lead to all sorts of additional problems. In this release we are changing our behaviour for marking a path dead.
Nimble Storage are another company who have been making a lot of waves in the world of storage in recent years. Based in San Jose, CA, they IPO’ed earlier this year and have something in the region of 600 employees worldwide at present. I caught up with Wen Yu, whom I have known since my early days at VMware, where we worked together in the support organization. Wen moved over to Nimble a couple of years back and is now a technical evangelist there. In fact, Nimble were the subject of the very first post on this blog site when I launched it almost two years ago. At the time I wrote about some significant architectural updates in their 2.0 release. My understanding is that their next major release (2.1) is just around the corner, so this was a good time to chat with Wen about some new features and other things happening in the Nimble world.
I thought it was about time that I looked at some of the larger storage vendors closer to home. One of these is of course Bull. This company is probably more familiar to those of us based in Europe than to those of you based in the Americas or Asia Pacific. However, VMware customers in EMEA will have seen them in the Solutions Exchange at VMworld Europe, where they have a reasonably large presence. After some conversation with my good pal Didier Pironet, whom I’ve met at a couple of recent VMUGs, I was introduced to Philippe Reynier, who is a manager in the Bull StorWay Competence Center and Solution Center. Philippe provided me with a lot of good detail on Bull’s storage solutions, which I will share with you here.
There are many occasions where the information displayed in the vSphere client is not sufficient to show all the relevant details about a particular storage device, or to troubleshoot problems related to that device. The purpose of this post is to explain some of the ESXCLI commands I use most often when trying to determine storage device information and to troubleshoot a particular device.
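To give a flavour of what follows, these are the sort of commands covered. The device identifier is a placeholder; substitute one of your own NAA identifiers:

```shell
# List all storage devices seen by the host: size, vendor, display name, etc.
esxcli storage core device list

# The same detail for a single device (naa.xxx is a placeholder identifier).
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx

# Multipathing (NMP) configuration for the device: SATP, PSP, working paths.
esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx

# VAAI primitive support status for the device.
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx
```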