Many of you will be aware of the new core storage features that were introduced in vSphere 6.5. If not, you can learn about them in this recently published white paper. Without doubt, the feature that has created the most interest is automated UNMAP (finally, I hear you say!). A few readers have asked about the following comment in the automated UNMAP section.
Automatic UNMAP is not supported on arrays with UNMAP granularity
greater than 1MB. Auto UNMAP feature support is footnoted in the
VMware Hardware Compatibility Guide (HCL).
So where do you find this info in the HCL? I’ll show you here.
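Alongside the HCL check, you can confirm from the ESXi host itself whether a given device advertises UNMAP support at all. A quick sketch using standard ESXCLI commands; the `naa.xxxxxxxxxxxxxxxx` identifier is a placeholder for your own device:

```shell
# List all storage devices to find the identifier (naa.*) of the LUN in question
esxcli storage core device list

# Check VAAI primitive support for that device. "Delete Status: supported"
# indicates the array advertises the UNMAP primitive for this LUN.
# (naa.xxxxxxxxxxxxxxxx is a placeholder - substitute your own device id)
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx
```

Note that "Delete Status: supported" only tells you the primitive is advertised; the UNMAP granularity constraint quoted above is array-specific, which is exactly why the HCL footnote matters.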
I held this post back a bit, as I know there is already a huge amount of information out there around Virtual Volumes. This must be one of the most anticipated storage features of all time, with the vast majority of our partners ready to deliver VVol-Ready storage arrays once vSphere 6.0 becomes generally available. We’ve been talking about VVols for some time now. Actually, even I have been talking about it for some time – look at this tech preview that I did way back in 2012 – I mean, it even includes a video! Things have changed a bit since that tech preview was captured, so let’s see what Virtual Volumes 2015 has in store.
Much kudos to my good friend Paudie who did a lot of this research.
There are many occasions where the vSphere client does not display all the relevant information about a particular storage device, or enough detail to troubleshoot problems related to it. The purpose of this post is to explain some of the ESXCLI commands that I use most often when trying to determine storage device information, and to troubleshoot a particular device.
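To set the scene, here are a few of the device-level ESXCLI commands in question; the `naa.xxxxxxxxxxxxxxxx` identifier is a placeholder for a real device id on your host:

```shell
# Basic device details: vendor, model, size, device type, thin-provisioning status
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx

# Multipathing view: which SATP and PSP claim the device, and the working paths
esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx

# All paths to the device, with adapter and target information
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx
```

Run without the `-d` option, each command reports on every device or path on the host rather than a single one.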
A number of new enhancements around Microsoft Cluster Service (MSCS) have been introduced in vSphere 5.5. I wanted to cover those in this post, as I know many of you continue to use MSCS for service availability in your vSphere environments.
Prior to the holidays, VMware released new versions of vCenter & ESXi on December 20th, with new releases for both vSphere 5.0 & 5.1. In this post, I want to discuss release 5.0 Update 2, which contains a number of notable storage-specific fixes that I wanted to highlight. I will follow up with a look at the storage enhancements in the new 5.1 release in a future post.
I get a lot of questions about how the vSphere APIs for Array Integration (VAAI) primitives compare from a protocol perspective. For instance, a common question is how the primitives for NAS storage arrays (NFS protocol) differ from those for block storage arrays (Fibre Channel, iSCSI and Fibre Channel over Ethernet protocols). It is a valid question because, yes, there are significant differences, and the purpose of this blog post is to detail them for you.
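One practical difference is already visible from ESXCLI: the block primitives are reported per device, while the NAS primitives are delivered by a vendor plugin installed on the host. A quick sketch, assuming a host with a VAAI-NAS plugin installed:

```shell
# Block (FC/iSCSI/FCoE) primitives reported per device:
# ATS, Clone (XCOPY), Zero (WRITE SAME) and Delete (UNMAP)
esxcli storage core device vaai status get

# NAS (NFS) primitives require a vendor-supplied VAAI-NAS plugin;
# if one is installed it shows up in the host's VIB list
esxcli software vib list | grep -i vaai
```

If the `grep` returns nothing, no VAAI-NAS plugin is installed and the NFS datastores on that host will not get hardware offload.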
For those of you who have been following my new vSphere 5.1 storage features series of blog posts, in part 5 I called out that we have a new Boot from Software FCoE feature. The purpose of this post is to delve into much more detail about the Boot from Software FCoE mechanism.