One of the long-awaited features introduced with vSphere 5.1 was VOMA (vSphere On-disk Metadata Analyzer). This is essentially a filesystem checker for both the VMFS metadata and the LVM (Logical Volume Manager). Now, if you have an outage on either the host or storage side, you have a mechanism to verify the integrity of your filesystems once everything comes back up, giving you peace of mind that all is well after the outage. There is a requirement, however, to have the VMFS volume quiesced when running the VOMA utility. This post will look at some possible reasons…
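To make that concrete, a minimal check run from the ESXi shell looks something like the following (the datastore label and NAA identifier are placeholders for your own, and any VMs on the volume must be powered off or migrated away first):

```
# Quiesce the volume: power off or migrate any VMs using it, then
# unmount the datastore (repeat on every host that has it mounted)
esxcli storage filesystem unmount -l mydatastore

# Check the VMFS metadata on the partition backing the datastore;
# -s saves the results to a file for later review
voma -m vmfs -f check -d /vmfs/devices/disks/naa.60060160a1b2c3d4:1 -s /tmp/analysis.txt
```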
This is a follow-up to my previous post on the 5.0U2 release. At the same time, VMware also released vCenter 5.1.0b. This post will look at the storage items addressed in that update, although the issues fixed in the storage space are relatively minor compared to the enhancements made in other areas. Note that this update is for vCenter only – there is no ESXi 5.1 update.
Among the new features of vSphere 5.1 were the SSD monitoring and I/O Device Management capabilities which I discussed in this post. I was doing some further testing on this recently and noticed that a number of fields from my SSD were reported as N/A. For example, I ran the following command against a local SSD drive on my host and these were the statistics returned.
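The SMART statistics for a device can be pulled with an esxcli query along these lines (the naa identifier below is just a placeholder for your own SSD):

```
# Find the identifier of the local SSD
esxcli storage core device list

# Retrieve the SMART statistics for that device; attributes the
# drive's plugin cannot read back are reported as N/A
esxcli storage core device smart get -d naa.500253825001a2b3
```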
Last week I published a blog article about new performance counters in vSphere 5.1 for monitoring Storage I/O Control (SIOC). Soon after publishing, I received a question about the siocActiveTimePercentage counter, which only ever seemed to show values of 0% or 100%.
There was an interesting question posted recently around how you could monitor Storage I/O Control activity. Basically, how would one know if SIOC had kicked in and was actively throttling I/O queues? Well, in vSphere 5.1, there are some new performance counters that can help you with that.
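Besides the counters themselves, a quick and dirty way to see throttling in action directly on a host is to watch the device queue depth in esxtop. SIOC does its throttling by adjusting the per-device queue depth, so a DQLEN value sitting below the device's configured maximum is a good hint that it has kicked in. Roughly:

```
# From an SSH session on the host, start esxtop
esxtop

# Press 'u' to switch to the disk device view, then watch the DQLEN
# column: when SIOC is throttling a datastore, DQLEN drops below the
# device's configured maximum queue depth
```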
Another feature introduced in vSphere 5.1 & vCloud Director 5.1 was the interoperability between vCloud Director & Storage DRS. Now vCloud Director can use datastore clusters for the placement of vCloud vApps, and allow Storage DRS to do what it does best – choose the best datastore in the datastore cluster for the initial placement of the vApp, and then load balance the capacity and performance of the datastores through the use of Storage vMotion.
For those of you who have been following my new vSphere 5.1 storage features series of blog posts, in part 5 I called out that we have a new Boot from Software FCoE feature. The purpose of this post is to delve into a lot more detail about the Boot from Software FCoE mechanism. Most of the initial configuration is done in the Option ROM of the NIC. Suitable NICs contain either what is called an FCoE Boot Firmware Table (FBFT) or an FCoE Boot Parameter Table (FBPT). For the purposes of this post, we’ll refer to it as the…
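Once the host has booted, the software FCoE pieces can be verified from the ESXi shell; something like the following should confirm that the adapter came up (the exact output will vary with the NIC):

```
# Show the FCoE-capable NICs the host has discovered
esxcli fcoe nic list

# Show the activated software FCoE adapters (they appear as vmhbaXX)
esxcli fcoe adapter list
```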