A closer look at Fusion-io ioControl 3.0

Last week I had the opportunity to catch up with Mike Koponen and Dean Steadman of Fusion-io. I had met Mike and Dean at VMworld 2013, where we spoke about Fusion-io’s acquisition of NexGen Storage earlier last year and what plans Fusion-io had for it. Well, the result is ioControl Hybrid Storage, and we discussed some of the ioControl architecture as well as a number of vSphere integration points.

Continue reading

QLogic – Execution Throttle Feature Concerns

I had a customer reach out to me recently to discuss how VMware’s Storage I/O Control and Adaptive Queuing behave alongside QLogic’s Execution Throttle feature. To be honest, I didn’t have a good understanding of QLogic’s Execution Throttle mechanism, so I did a little research to see if this feature inter-operates with VMware’s own I/O congestion management features.
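
For context, ESXi’s adaptive queuing is governed by the Disk.QFullSampleSize and Disk.QFullThreshold advanced settings, and it is disabled by default (QFullSampleSize = 0). Before worrying about how it interacts with Execution Throttle, it is worth confirming whether it is even switched on. Below is a minimal pyVmomi sketch of such a check; it is only an illustration rather than anything from the post itself, and the vCenter address and credentials are placeholders.

```python
# Minimal pyVmomi sketch (illustration only): report the adaptive queuing
# advanced settings on each ESXi host. Disk.QFullSampleSize = 0 means
# adaptive queuing is disabled, which is the default.
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        adv = host.configManager.advancedOption
        for name in ("Disk.QFullSampleSize", "Disk.QFullThreshold"):
            for opt in adv.QueryOptions(name):
                print("%s  %s = %s" % (host.name, opt.key, opt.value))
    view.Destroy()
finally:
    Disconnect(si)
```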

Continue reading

A closer look at NetApp clustered Data ONTAP

I’ve been having some interesting discussions with my friends over at NetApp recently. I wanted to learn more about clustered Data ONTAP 8.2 and its new scale-out functionality. In the storage array world, traditional scale-up mechanisms usually involve either replacing disk drives with faster/newer models or replacing old array controllers with newer ones. In the worst case, a forklift upgrade is required to do a technology refresh of your array. Another approach, scale-out, is fast becoming the accepted way of handling storage requirements going forward, and scale-out storage is now big news. With scale-out, you simply add additional resources to your existing shared storage pool.

Over the past year I have been to a number of VMUG (VMware User Group) meetings and have sat in on some of the NetApp sessions on their clustered Data ONTAP release. NetApp have also realized that the demand is there for scale-out, and they have introduced their very own unified scale-out storage solution called clustered Data ONTAP. Basically, this allows you to take a number of different NetApp storage array models and cluster them together to provide a single, unified and virtualized shared storage pool. Using clustered Data ONTAP 8.2, NetApp customers can now increase scalability using a scale-out rather than a scale-up approach. Let’s look at clustered Data ONTAP and some of the new features it brings in more detail.

Continue reading

A closer look at SolidFire

All Flash Arrays continue to make the news. Whether it is EMC’s XtremIO launch or Violin Memory’s current market woes, there is no doubt that AFAs continue to generate a lot of interest. Those of you interested in flash storage will not need an introduction to SolidFire. The company was founded by Dave Wright (ex-Rackspace) and has been around since 2009. I have been trying to catch up with SolidFire for some time, as I’d heard their pitch around Quality of Service on a per-volume basis and wanted to learn more, especially about how it integrates with vSphere features. Recently I caught up with Dave Cahill and Adam Carter of SolidFire to chat about SolidFire in general and what the VMware integration points are.

Continue reading

SIOC and datastores spread across all spindles in the array

This is a query which has come up on numerous occasions in the past, especially in the comments section of a post on debunking SIOC myths over on the vSphere Storage Blog. This post highlights some recommendations to implement when you have a storage array that presents LUNs spread across all spindles, or indeed multiple LUNs that are all backed by the same set of spindles from a particular aggregate or storage pool.

Continue reading

Storage I/O Control – Workload Injector Behaviour

You may remember an enhancement we made to Storage I/O Control (SIOC) in the vSphere 5.1 release whereby SIOC can now automatically determine the characteristics, and thus the latency threshold, of a datastore. Prior to this change, SIOC used either a default value or had customers set the threshold manually. Neither of these was ideal, so we introduced this automatic method. However, there has been little detail on how often this latency threshold is calculated. In other words, does the calculation take place only when SIOC is first enabled, or is it recalculated on an ongoing basis?
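
As an aside, while the injector’s schedule is not obviously exposed, the outcome of its calculation is visible through the vSphere API: each datastore exposes its SIOC configuration, including the threshold mode and the congestion threshold value currently reported. Here is a minimal pyVmomi sketch (vSphere 5.1 or later assumed; the vCenter address and credentials are placeholders) that dumps these values.

```python
# Minimal pyVmomi sketch (illustration only): dump each datastore's SIOC
# configuration so you can see whether the latency threshold is in automatic
# or manual mode and what threshold value is currently reported.
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        iorm = ds.iormConfiguration   # unset on datastores without SIOC support
        if iorm is None:
            continue
        print("%-30s enabled=%-5s mode=%-9s peak%%=%s threshold=%sms" %
              (ds.name, iorm.enabled, iorm.congestionThresholdMode,
               iorm.percentOfPeakThroughput, iorm.congestionThreshold))
    view.Destroy()
finally:
    Disconnect(si)
```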

Continue reading

Heads Up! Device Queue Depth on QLogic HBAs

Just thought I’d bring to your attention something that has been doing the rounds here at VMware recently, and which will be applicable to those of you using QLogic HBAs with ESXi 5.x. The following are the device queue depths you will find when using QLogic HBAs for SAN connectivity:

  • ESXi 4.1 U2 – 32
  • ESXi 5.0 GA – 64
  • ESXi 5.0 U1 – 64
  • ESXi 5.1 GA – 64

The higher queue depth of 64 has been in place since 24 Aug 2011 (the 5.0 GA release). The issue is that this change has not been documented anywhere. For the majority of users this is not an area of concern, and is probably a benefit, but there are some concerns.
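
If you want to check whether a host has explicitly overridden the QLogic driver’s queue depth, as opposed to simply running with the driver default listed above, one option is to query the configured module option string via the vSphere API. The sketch below is only an illustration: the module name qla2xxx and the ql2xmaxqdepth parameter relate to the classic QLogic FC driver on ESXi 5.x, so adjust for your driver version, and the vCenter address and credentials are placeholders.

```python
# Minimal pyVmomi sketch (illustration only): show any options explicitly
# configured for the QLogic FC driver module on each host. An empty string
# means the driver is running with its built-in default device queue depth;
# something like "ql2xmaxqdepth=32" means it has already been tuned.
# Module name (qla2xxx) and vCenter details below are placeholders/assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        kms = host.configManager.kernelModuleSystem
        try:
            opts = kms.QueryConfiguredModuleOptionString("qla2xxx")
        except vim.fault.NotFound:
            continue   # this host does not have the qla2xxx module
        print("%s: '%s'" % (host.name, opts))
    view.Destroy()
finally:
    Disconnect(si)
```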

Continue reading