As many of you are aware, VMware made a number of announcements at VMworld 2012. There were three technical previews in the storage space. The first of these was on Virtual Volumes (VVols), which is aimed at making storage objects in virtual infrastructures more granular. The second was Virtual SAN (VSAN), previously known as Distributed Storage, a new distributed datastore using local ESXi storage. The final one was Virtual Flash (vFlash). However, rather than diving into vFlash, I thought it might be more useful to take a step back and have a look at flash technologies in general.
Following on from part 1 of the NFS Best Practices series, part 2 looks at tuning from a vSphere perspective. As mentioned, our objective is to update the NFS Best Practices white paper, which is now rather dated. There are quite a number of tunable parameters available to you when using NFS datastores. Before we drill into these advanced settings in more detail, it is important to understand that the recommended values for some of them may (and probably will) vary from one storage array vendor to another. My objective is to…
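To give a flavour of what is involved, here is a minimal sketch of how one of these NFS advanced options can be queried and changed with esxcli; the option (NFS.MaxVolumes) and the value shown are purely illustrative, and any change should follow your storage vendor's recommendation:

    # List the current value and description of the NFS.MaxVolumes advanced option
    esxcli system settings advanced list -o /NFS/MaxVolumes

    # Change it to an illustrative value (use whatever your storage vendor recommends)
    esxcli system settings advanced set -o /NFS/MaxVolumes -i 64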
There is a project currently underway here at VMware to update the current Best Practices for running VMware vSphere on Network Attached Storage. The existing paper is a number of years old now, and we are looking to bring it up to date. There are a number of different sections that need to be covered, but we decided to start with networking, since getting your networking infrastructure right obviously plays a crucial part in NAS performance and availability.
One of the new features of vSphere 5.1 was the SSD monitoring and I/O Device Management functionality which I discussed in this post. I was doing some further testing on this recently and noticed that a number of fields from my SSD were reported as N/A. For example, I ran the following command against a local SSD drive on my host and these were the statistics returned.
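For reference, the statistics in question come from the SMART namespace added to esxcli in vSphere 5.1; a sketch of the query is shown below, with a purely illustrative device identifier (the output will vary from drive to drive):

    # Retrieve SMART statistics (media wearout, temperature, reallocated sectors, etc.)
    # for a local SSD; the device identifier below is illustrative only
    esxcli storage core device smart get -d t10.ATA_____EXAMPLE_SSD_DEVICE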
I get a lot of questions about how the vSphere APIs for Array Integration (VAAI) primitives compare from a protocol perspective. A common request, for instance, is to describe the differences between the primitives for NAS storage arrays (NFS protocol) and the primitives for block storage arrays (Fibre Channel, iSCSI, and Fibre Channel over Ethernet protocols). It is a valid question because, yes, there are significant differences, and the purpose of this blog post is to detail them for you.
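One quick pointer before we get into the detail: on the block side, per-device primitive support (ATS, clone, zero, delete) can be checked directly from the host with esxcli, whereas the NAS primitives rely on a vendor-supplied plugin. A hedged sketch, where the device identifier is illustrative:

    # Show which block (SCSI) VAAI primitives a device supports; identifier is illustrative
    esxcli storage core device vaai status get -d naa.60000000000000000000000000000001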
I was asked recently to provide some assistance with a VSA installation problem. The issue this person experienced is described in the release notes for VSA 5.1.1: the VSA 5.1.1 installation fails with the message "Error 2896: Executing action failed". This problem might occur when the location of the temp drive is set to a drive other than C:, where VSA Manager is to be installed. The workaround is to make sure that the user and system TEMP and TMP variables point to a specified location on the C: drive.
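A minimal illustration of applying that workaround from a Windows command prompt on the VSA Manager host follows; the C:\Temp path is illustrative, and the machine-level (system) variables are adjusted under System Properties > Environment Variables:

    rem Check where the user TEMP and TMP variables currently point
    echo %TEMP%
    echo %TMP%

    rem Redirect them to a folder on the C: drive for the current session before re-running the installer
    set TEMP=C:\Temp
    set TMP=C:\Temp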
In a few recent posts, I’ve been looking at performance counters in vSphere 5.1. One of my colleagues, Hugo Strydom, reached out to me about building a vCenter Operations (vCOps) custom dashboard to monitor the new Storage I/O Control (SIOC) counters in vSphere 5.1 which I detailed here. Hugo has done a whole series of great blog posts on vCOps on his blog site. I thought it would be really cool to get this set up in my environment and take a look.