Following on from part 1 of the NFS Best Practices series, part 2 looks at tuning from a vSphere perspective. As mentioned, our objective is to update the NFS Best Practice white paper, which is now rather dated. There are quite a number of tunable parameters available to you when using NFS datastores. Before we drill into these advanced settings in more detail, it is important to understand that the recommended values for some of these settings may (and probably will) vary from one storage array vendor to another. My objective is to…
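By way of illustration only (the option names below are standard ESXi advanced settings, but the values to use should always come from your storage array vendor's documentation), these parameters can be examined and changed from the ESXi shell:

    # Show the current value of an NFS advanced option
    esxcli system settings advanced list -o /NFS/MaxVolumes

    # Raise the maximum number of mounted NFS volumes (value shown is an example only)
    esxcli system settings advanced set -o /NFS/MaxVolumes -i 64

    # Heap settings are typically reviewed alongside MaxVolumes
    esxcli system settings advanced list -o /Net/TcpipHeapSize

Note that some of these options only take effect after a host reboot, which is another reason to plan the values up front rather than tune reactively.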
There is a project currently underway here at VMware to update the Best Practices for running VMware vSphere on Network Attached Storage. The current paper is a number of years old now, and we are looking to bring it up to date. There are a number of different sections to cover, but we decided to start with networking, since getting your networking infrastructure right will obviously play a crucial part in your NAS performance and availability. We are also looking for feedback on what you perceive as a best practice. The thing about best practices is…
Among the new features of vSphere 5.1 were the SSD monitoring and I/O Device Management capabilities which I discussed in this post. I was doing some further testing on this recently and noticed that a number of fields from my SSD were reported as N/A. For example, I ran the following command against a local SSD drive on my host, and these were the statistics returned.
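(The excerpt cuts off before the command itself; for context, SMART statistics for a local device are typically retrieved from the ESXi shell along the following lines. The device identifier below is purely a placeholder; substitute your own from the device list.)

    # List local devices to find the SSD's identifier
    esxcli storage core device list

    # Retrieve SMART statistics for that device (identifier is a placeholder)
    esxcli storage core device smart get -d naa.xxxxxxxxxxxxxxxx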
I was asked recently to provide some assistance with a VSA installation problem. The issue this person experienced is described in the release notes for VSA 5.1.1: VSA 5.1.1 installation fails with the error "Error 2896: Executing action failed". This problem might occur when the location of the temp drive is set to a drive other than C:, where VSA Manager is to be installed. Workaround: Make sure that the user and system TEMP and TMP variables point to a location on the C: drive.
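(As a quick sanity check before running the installer, you can confirm where these variables point from a Windows command prompt; the C:\Temp path below is just an example location.)

    :: Check where the current TEMP and TMP variables point
    echo %TEMP%
    echo %TMP%

    :: Persistently point the user-level variables at a folder on C: (example path)
    setx TEMP C:\Temp
    setx TMP C:\Temp

The system-level variables are changed under System Properties > Environment Variables (or with setx /M from an elevated prompt); open a new command prompt afterwards so the installer picks up the new values.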
There was an interesting question posted recently around how you could monitor Storage I/O Control activity. Basically, how would one know if SIOC had kicked in and was actively throttling I/O queues? Well, in vSphere 5.1, there are some new performance counters that can help you with that.
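(As a sketch of how such counters might be pulled programmatically, the PowerCLI snippet below queries the datastore counter group on a host. The counter name siocActiveTimePercentage and the host name are assumptions on my part, so verify both against the counter list in your own environment.)

    # Connect to vCenter first with Connect-VIServer, then pick a host
    $esx = Get-VMHost -Name "esx01.example.com"

    # Pull realtime samples of the SIOC activity counter for each datastore on the host
    Get-Stat -Entity $esx -Stat "datastore.siocActiveTimePercentage.average" -Realtime -MaxSamples 12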
For those of you who have been following my new vSphere 5.1 storage features series of blog posts, in part 5 I called out that we have a new Boot from Software FCoE feature. The purpose of this post is to delve into a lot more detail about the Boot from Software FCoE mechanism. Most of the initial configuration is done in the Option ROM of the NIC. Suitable NICs contain what is called either an FCoE Boot Firmware Table (FBFT) or an FCoE Boot Parameter Table (FBPT). For the purposes of this post, we’ll refer to it as the…
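(Once a host has booted this way, the software FCoE configuration can be inspected from the ESXi shell; a minimal sketch:)

    # List NICs with FCoE capability and any activated software FCoE adapters
    esxcli fcoe nic list
    esxcli fcoe adapter list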
vSphere 5.1 introduced a number of vCloud Director (vCD) interoperability features from a storage perspective, namely the ability to take VM snapshots from within the vCD UI, interoperability with Storage Profiles, and interoperability with Storage DRS. Admittedly, it’s been a while since I played with vCD and I am a little rusty, but I wanted to see how well these storage features worked with vCD 5.1. I’ll follow up with some future posts on how this all integrates, but this first post is just to highlight an issue I ran into in my haste to get the environment up and running. The…