Does Storage DRS work with Adaptive Queuing?

Many of you will be aware that Storage DRS uses Storage I/O Control (SIOC) for load balancing based on I/O metrics. However, a statement in one of our white papers has recently raised a few questions with both our customers and partners. The statement is as follows:

“Queue depth throttling is not compatible with Storage DRS” (pg. 34), from http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf.

This assertion led many to believe that Storage DRS would not work well with Adaptive Queuing (AQ), another of VMware’s queue depth throttling mechanisms. Internally, however, many felt that this wasn’t an accurate statement, but some work was needed to verify that the combination would not cause any issues. This led to a number of tests being run with Storage DRS and both of our queue throttling features, SIOC and Adaptive Queuing. I am using this post to share those results.
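For context, Adaptive Queuing is enabled per device on the ESXi host, while SIOC is enabled per datastore. Below is a minimal sketch of enabling Adaptive Queuing with esxcli, assuming ESXi 5.1 or later; the device identifier is a hypothetical placeholder, and the threshold and sample-size values shown should come from your array vendor's guidance rather than being taken as recommendations:

    # Enable Adaptive Queuing on a single device (hypothetical device ID).
    # When the array reports QFULL/BUSY conditions, the LUN queue depth is
    # throttled back, then gradually restored once the condition clears.
    esxcli storage core device set --device naa.xxxxxxxxxxxxxxxx --queue-full-threshold 4 --queue-full-sample-size 32

    # Confirm the settings, along with the device max queue depth.
    esxcli storage core device list --device naa.xxxxxxxxxxxxxxxx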

Continue reading

PernixData revisited – a chat with Frank Denneman

I’m sure Frank Denneman will need no introduction to many of you reading this article. Frank and I both worked in the technical marketing organization at VMware, before Frank moved on to PernixData last year and I moved to Integration Engineering here at VMware. PernixData released FVP 1.0 last year, and I did a short post on them here. I’d seen a number of people discussing new FVP features in the community, especially after PernixData co-founder Satyam’s presentation at Tech Field Day 5 (#TFD5). I decided to reach out to Frank and see if he could spare some time to revisit some of the new features that PernixData is planning to introduce. Fortunately, he did. I started by asking Frank how PernixData is doing in general, before moving on to the new bits.

Continue reading

A closer look at Fusion-io ioControl 3.0

Last week I had the opportunity to catch up with Mike Koponen and Dean Steadman of Fusion-io. I had met with Mike and Dean at VMworld 2013, and spoke to them about the Fusion-io acquisition of NexGen Storage earlier last year, and what plans Fusion-io had for this acquisition. Well, the result is ioControl Hybrid Storage, and we discussed some of the architecture of ioControl as well as a number of vSphere integration points.

Continue reading

QLogic – Execution Throttle Feature Concerns

I had a customer reach out to me recently to discuss VMware’s Storage I/O Control and Adaptive Queuing behavior, and how these features work with QLogic’s Execution Throttle feature. To be honest, I didn’t have a good understanding of the Execution Throttle mechanism from QLogic, so I did a little research to see if this feature inter-operates with VMware’s own I/O congestion management features.
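As a quick sanity check on a host, you can at least see how the relevant queue settings line up with esxcli. This is only a sketch: the device identifier is a hypothetical placeholder, and the module name qla2xxx is an assumption that varies with the QLogic driver generation in use:

    # Show the device max queue depth and the queue-full (Adaptive Queuing)
    # settings for a given device.
    esxcli storage core device list --device naa.xxxxxxxxxxxxxxxx

    # List the QLogic driver module parameters, including any LUN queue depth
    # setting. (Module name is an assumption; it differs between driver versions.)
    esxcli system module parameters list --module qla2xxx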

Continue reading

A closer look at NetApp clustered Data ONTAP

I’ve been having some interesting discussions with my friends over at NetApp recently. I wanted to learn more about the new features in clustered Data ONTAP 8.2 and its scale-out functionality. In the storage array world, traditional scale-up mechanisms usually involve either replacing disk drives with faster/newer models or replacing old array controllers with newer controllers. In worst-case scenarios, a forklift upgrade is required to do a technology refresh of your array. Another approach, scale-out, is fast becoming the accepted way of handling storage requirements going forward. Scale-out storage is now big news. With scale-out, you simply add additional resources to your already existing shared storage pool.

Over the past year I have been to a number of VMUG (VMware User Group) meetings and have sat in on some of the NetApp sessions on their clustered Data ONTAP release. NetApp have also realized that the demand is there for scale-out, and they have introduced their very own unified scale-out storage solution called clustered Data ONTAP. Basically, this allows you to take a bunch of different NetApp storage array models and cluster them together to provide a single, unified and virtualized shared storage pool. Using clustered Data ONTAP 8.2, NetApp customers can now increase scalability using a scale-out rather than a scale-up approach. Let’s look at clustered Data ONTAP and some of the new features it brings in more detail.

Continue reading

A closer look at SolidFire

All Flash Arrays continue to make the news. Whether it is EMC’s XtremIO launch or Violin Memory’s current market woes, there is no doubt that AFAs continue to generate a lot of interest. Those of you interested in flash storage will not need an introduction to SolidFire. The company was founded by Dave Wright (ex-RackSpace) and has been around since 2009. I had been trying to catch up with SolidFire for some time, as I’d heard their pitch around Quality of Service on a per-volume basis and wanted to learn more, especially how it integrated with vSphere features. Recently I caught up with Dave Cahill and Adam Carter of SolidFire to have a chat about SolidFire in general and what the VMware integration points are.

Continue reading

SIOC and datastores spread across all spindles in the array

This is a query which has come up on numerous occasions in the past, especially in the comments section of a post debunking SIOC myths on the vSphere Storage Blog. This post highlights some recommendations which should be implemented when you have a storage array that presents LUNs spread across all spindles, or indeed multiple LUNs all backed by the same set of spindles from a particular aggregate or storage pool.

Continue reading