There is a project currently underway here at VMware to update the Best Practices for running VMware vSphere on Network Attached Storage. The existing paper is a number of years old now, and we are looking to bring it up to date. There are a number of different sections that need to be covered, but we decided to start with networking, as getting your networking infrastructure right will obviously play a crucial part in your NAS performance and availability. We are also looking for feedback on what you perceive as a best practice. The thing about best practices is…
Among the new features of vSphere 5.1 were the SSD monitoring and I/O Device Management capabilities, which I discussed in this post. I was doing some further testing on this recently and noticed that a number of fields from my SSD were reported as N/A. For example, I ran the following command against a local SSD drive on my host and these were the statistics returned.
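To give a flavour of the sort of query involved (this is just a sketch rather than the exact command from the post, and the NAA identifier below is a placeholder for your own device ID), SMART statistics for a local drive can be pulled from the ESXi shell along these lines:

    # Identify the local SSD's device identifier
    esxcli storage core device list
    # Retrieve the SMART statistics for that device
    esxcli storage core device smart get -d naa.xxxxxxxxxxxxxxxx

Any attribute that the SMART plugin cannot read for a particular drive is the sort of field that shows up as N/A in the returned table.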
I get a lot of questions about how the vSphere APIs for Array Integration (VAAI) primitives compare from a protocol perspective. For instance, I am commonly asked to describe the differences between the primitives for NAS storage arrays (NFS protocol) and the primitives for block storage arrays (Fibre Channel, iSCSI and Fibre Channel over Ethernet protocols). It is a valid question because, yes, there are significant differences, and the purpose of this blog post is to detail them for you.
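As a quick illustration (a sketch only; the NAA identifier is a placeholder, and the output will vary by array and plugin), the ESXi shell can show which primitives a given device or datastore is reporting support for:

    # Block storage: ATS, Clone, Zero and Delete primitive status for a device
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx
    # NAS storage: the Hardware Acceleration column reflects VAAI-NAS plugin support
    esxcli storage nfs list

One difference worth keeping in mind is that the block primitives are built into ESXi, whereas the NAS primitives rely on a vendor-supplied VAAI-NAS plugin being installed on the host.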
I was asked recently to provide some assistance with a VSA installation problem. The issue which this person experienced is described in the release notes for VSA 5.1.1: the VSA 5.1.1 installation fails with the error "Error 2896: Executing action failed". This problem might occur when the location of the temp drive is set to a drive other than C:, the drive where VSA Manager is to be installed. Workaround: make sure that the user and system TEMP and TMP variables point to a location on the C: drive.
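To sketch how you might apply that workaround on the Windows machine where VSA Manager is being installed (C:\Temp is just an example folder; any location on the C: drive should do), from a command prompt:

    rem Check where the TEMP and TMP variables currently point
    echo %TEMP%
    echo %TMP%
    rem Create a temp folder on C: and redirect both variables to it
    rem (set only affects this command-prompt session)
    mkdir C:\Temp
    set TEMP=C:\Temp
    set TMP=C:\Temp

Since set only changes the variables for the current session, launch the installer from that same prompt, or change the user and system TEMP and TMP variables permanently through the Environment Variables dialog instead.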
In a few recent posts, I’ve been looking at performance counters in vSphere 5.1. One of my colleagues, Hugo Strydom, reached out to me about building a vCenter Operations (vCOps) custom dashboard to monitor the new Storage I/O Control (SIOC) counters in vSphere 5.1 which I detailed here. Hugo has done a whole series of great blog posts on vCOps on his blog site. I thought it would be really cool to get this set up in my environment and take a look.
There was an interesting question posted recently around how you could monitor Storage I/O Control activity. Basically, how would one know if SIOC had kicked in and was actively throttling I/O queues? Well, in vSphere 5.1, there are some new performance counters that can help you with that.
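As one way to pull those counters programmatically (a hedged sketch: the counter key below is my assumption for the 5.1 "SIOC active time percentage" statistic, so verify the exact name against the counter list in your own vCenter, and the server and datastore names are placeholders), a PowerCLI query might look like this:

    # Connect to vCenter and pick a datastore that has SIOC enabled
    Connect-VIServer vcenter.example.com
    $ds = Get-Datastore "Datastore01"
    # Assumed counter key for the SIOC active time percentage statistic
    Get-Stat -Entity $ds -Stat "datastore.siocActiveTimePercentage.average" -MaxSamples 12

A sustained non-zero value for that statistic would suggest SIOC has kicked in and is actively throttling during that interval.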
For those of you who have been following my new vSphere 5.1 storage features series of blog posts, in part 5 I called out that we have a new Boot from Software FCoE feature. The purpose of this post is to delve into a lot more detail about the Boot from Software FCoE mechanism. Most of the initial configuration is done in the Option ROM of the NIC. Suitable NICs contain what is called either an FCoE Boot Firmware Table (FBFT) or an FCoE Boot Parameter Table (FBPT). For the purposes of this post, we’ll refer to it as the…
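As a quick aside, once a host has booted this way, the software FCoE adapters that were activated from those option ROM settings can be checked from the ESXi shell (output will vary with your CNA and driver):

    # List the software FCoE adapters that have been activated on the host
    esxcli fcoe adapter list
    # List the NICs that are capable of, or enabled for, software FCoE
    esxcli fcoe nic list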