Welcome to part 3 of the NFS Best Practices series of posts. Part 1 looked at networking and part 2 at configuration options; this post turns to interoperability with vSphere features. We are primarily interested in features which relate in some way to storage, and to NFS storage in particular. While many of my regular readers will be well versed in most of these technologies, I’m hoping there will still be some items of interest. Most of the interoperability features are tried and tested with NFS, but I will try to highlight areas that might be…
Following on from part 1 of the NFS Best Practices series, part 2 is going to look at tuning from a vSphere perspective. As mentioned, our objective is to update the NFS Best Practices white paper, which is now rather dated. There are quite a number of tuneable parameters available to you when using NFS datastores. Before we drill into these advanced settings in more detail, it is important to understand that the recommended values for some of them may (and probably will) vary from one storage array vendor to another. My objective is to…
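As a taster of what that post covers, here is a minimal PowerCLI sketch of how such advanced settings can be inspected and changed. The option names are real vSphere settings, but the host name and the value of 256 are purely illustrative; use the values your storage array vendor recommends.

```powershell
# Assumes an existing connection: Connect-VIServer -Server <vcenter>
$vmhost = Get-VMHost -Name "esx01.example.com"   # hypothetical host name

# Read the current values of a couple of NFS-related advanced settings
Get-AdvancedSetting -Entity $vmhost -Name "NFS.MaxVolumes","Net.TcpipHeapSize"

# Raise the maximum number of NFS mounts - 256 is illustrative only;
# check your array vendor's documentation for the recommended value
Get-AdvancedSetting -Entity $vmhost -Name "NFS.MaxVolumes" |
    Set-AdvancedSetting -Value 256 -Confirm:$false
```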
There is a project currently underway here at VMware to update the Best Practices for running VMware vSphere on Network Attached Storage. The existing paper is a number of years old now, and we are looking to bring it up to date. There are a number of different sections that need to be covered, but we decided to start with networking, since getting your networking infrastructure right obviously plays a crucial part in NAS performance and availability. We are also looking for feedback on what you perceive as a best practice. The thing about best practices is…
One of the new features of vSphere 5.1 was SSD monitoring and I/O Device Management, which I discussed in this post. I was doing some further testing on this recently and noticed that a number of fields from my SSD were reported as N/A. For example, I ran the following command against a local SSD drive on my host and these were the statistics returned.
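The excerpt cuts off before the command itself, but as a hedged illustration, SMART data for a device can be pulled from the esxcli storage namespace via PowerCLI's Get-EsxCli; the host name and device identifier below are hypothetical.

```powershell
# Get an esxcli handle for the host (hypothetical host name)
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.example.com")

# List devices and note the SSD's NAA identifier in the output
$esxcli.storage.core.device.list()

# Pull SMART statistics for the SSD - the identifier below is hypothetical
$esxcli.storage.core.device.smart.get("naa.500xxxxxxxxxxxxx")
```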
I was asked recently to provide some assistance with a VSA installation problem. The issue which this person experienced is described in the release notes for VSA 5.1.1: VSA 5.1.1 installation fails with the “Error 2896: Executing action failed” message. This problem might occur when the location of the temp drive is set to a drive other than C:, where VSA Manager is to be installed. Workaround: Make sure that the user and system TEMP and TMP variables point to a specified location on the C: drive.
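A quick PowerShell sketch of that workaround, checking where TEMP and TMP currently point and repointing them to the C: drive before installing VSA Manager; C:\Temp is just an example path.

```powershell
# Check where the user TEMP and TMP variables currently point
$env:TEMP
$env:TMP

# Repoint them to a location on C: for the current user
# (C:\Temp is an example path - make sure it exists first)
New-Item -ItemType Directory -Path "C:\Temp" -Force | Out-Null
[Environment]::SetEnvironmentVariable("TEMP", "C:\Temp", "User")
[Environment]::SetEnvironmentVariable("TMP",  "C:\Temp", "User")

# The system-wide variables use the "Machine" scope instead,
# which requires an elevated prompt
```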
There was an interesting question posted recently about how you could monitor Storage I/O Control activity. Basically, how would one know if SIOC had kicked in and was actively throttling I/O queues? Well, in vSphere 5.1, there are some new performance counters that can help you with that.
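As a hedged PowerCLI sketch: one of the counters added in vSphere 5.1, datastore.siocActiveTimePercentage, reports the percentage of time SIOC was actively controlling datastore latency, and it can be sampled with Get-Stat. The host name below is hypothetical, and real-time statistics must be available on the host.

```powershell
# Real-time samples of the SIOC active-time counter for one host
# (host name is hypothetical; the counter requires vSphere 5.1)
$vmhost = Get-VMHost -Name "esx01.example.com"
Get-Stat -Entity $vmhost -Stat "datastore.siocActiveTimePercentage.average" `
    -Realtime -MaxSamples 12 |
    Select-Object Timestamp, Instance, Value
```

A sustained non-zero value against a datastore instance would suggest SIOC has kicked in and is throttling that datastore's queues.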
For those of you who have been following my vSphere 5.1 storage features series of blog posts, in part 5 I called out that we have a new Boot from Software FCoE feature. The purpose of this post is to delve into a lot more detail about the Boot from Software FCoE mechanism. Most of the initial configuration is done in the Option ROM of the NIC. Suitable NICs contain what is called either an FCoE Boot Firmware Table (FBFT) or an FCoE Boot Parameter Table (FBPT). For the purposes of this post, we’ll refer to it as the…
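While the boot tables themselves live in the NIC's Option ROM, once ESXi is up the software FCoE side can be sanity-checked from the esxcli fcoe namespace via PowerCLI. A sketch, with hypothetical host and vmnic names:

```powershell
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01.example.com")

# NICs with FCoE capability, and any activated software FCoE adapters
$esxcli.fcoe.nic.list()
$esxcli.fcoe.adapter.list()

# Trigger FCoE discovery on a capable NIC (vmnic name is hypothetical)
$esxcli.fcoe.nic.discover("vmnic2")
```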