Welcome to the next instalment of NFS Best Practices. In this fourth and final best practice section on NFS, I asked a number of our major NAS storage partners some sizing questions. The questions fell into three categories: Do you have a recommended NFS volume size/sweet spot? Do you have a volume block size recommendation that should be configured on the array? Do you have a recommended number of VMs per NFS datastore? In fact, the responses from the vendors were all pretty similar. Let’s take a look at what they had to say. Maximum volume Size/Sweet Spot…
I recently received a question about the following message appearing in the VMkernel logs of an ESXi host: 2012-12-07T12:15:58.994Z cpu17:420151)ScsiDeviceIO: 6340: Could not detect setting of sitpua for device naa.xxx. Error Not supported. So what does that mean? Firstly, it isn’t anything to be greatly concerned about. SITPUA, short for Single Initiator Thin Provisioning Unit Attention, relates to Out Of Space (OOS) conditions on Thin Provisioned LUNs. To ensure that an OOS warning is sent to just one host using the affected LUN, the SITPUA bit in the Thin Provisioning Mode Page must be set to…
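If you want to check whether your own hosts are logging this message, or what the array is reporting for thin provisioning support, a couple of quick commands from the ESXi shell will tell you. Note that naa.xxx below is just a placeholder for the device identifier of your LUN, and on 5.x hosts the device listing should include a Thin Provisioning Status field:

~ # grep -i sitpua /var/log/vmkernel.log
~ # esxcli storage core device list -d naa.xxx | grep -i "Thin Provisioning"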
Welcome to part 3 of the NFS Best Practices series of posts. Part 1 covered networking and part 2 covered configuration options; this post looks at interoperability with vSphere features. We are primarily interested in features which are in some way related to storage, and NFS storage in particular. While many of my regular readers will be well versed in most of these technologies, I’m hoping there will still be some items of interest. Most of the interoperability features are tried and tested with NFS, but I will try to highlight areas that might be…
As many of you are aware, VMware made a number of announcements at VMworld 2012. There were three technical previews in the storage space. The first of these was on Virtual Volumes (VVols), which is aimed at making storage objects in virtual infrastructures more granular. The second was Virtual SAN (VSAN), previously known as Distributed Storage, a new distributed datastore using local ESXi storage. The final one was Virtual Flash (vFlash). However, rather than diving into vFlash, I thought it might be more useful to take a step back and have a look at flash technologies in general.
Following on from part 1 of the NFS Best Practices, part 2 looks at tuning from a vSphere perspective. As mentioned, our objective is to update the NFS Best Practices white paper, which is now rather dated. There are quite a number of tuneable parameters available to you when using NFS datastores. Before we drill into these advanced settings in a bit more detail, it is important to understand that the recommended values for some of these settings may (and probably will) vary from one storage array vendor to another. My objective is to…
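As a simple illustration of what I mean by advanced settings, a parameter such as NFS.MaxVolumes can be examined and changed from the ESXi shell as well as from the vSphere Client. The value of 64 shown here is purely an example; always check your storage array vendor’s recommendation before changing anything:

~ # esxcli system settings advanced list -o /NFS/MaxVolumes
~ # esxcli system settings advanced set -o /NFS/MaxVolumes -i 64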
There is a project currently underway here at VMware to update the Best Practices for running VMware vSphere on Network Attached Storage. The current paper is a number of years old now, and we are looking to bring it up to date. There are a number of different sections that need to be covered, but we decided to start with networking, since getting the networking infrastructure right obviously plays a crucial part in NAS performance and availability. We are also looking for feedback on what you perceive as a best practice. The thing about best practices is…
One of the new features in vSphere 5.1 was SSD monitoring and I/O Device Management, which I discussed in this post. I was doing some further testing on this recently and noticed that a number of fields from my SSD were reported as N/A. For example, I ran the following command against a local SSD drive on my host and these were the statistics returned.
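The command and its output are truncated in this excerpt, but for context, the SSD SMART statistics introduced with the vSphere 5.1 monitoring feature can be pulled with something along these lines, where naa.xxx is a placeholder for the device identifier of the local SSD:

~ # esxcli storage core device smart get -d naa.xxx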