We made a number of enhancements to Storage DRS in vSphere 6.0, and this article discusses those changes. There is a white paper covering many of the previous limitations of Storage DRS interoperability which I’d recommend reviewing; although a number of years old, it highlights many of the Storage DRS interoperability concerns. As you will see, a great many of these have now been addressed, along with some pretty interesting feature enhancements.
As many of you are aware, I was at VMworld in San Francisco last week. I wrote a number of articles about some VMware storage announcements, such as EVO:RAIL, VAIO and VVols. However, there were, as usual, quite a number of storage vendors at this year’s conference. One of the vendors that I really wanted to learn more about was Kaminario, an all-flash array vendor that I’d heard a lot about. I had the pleasure of spending some time at the Kaminario booth with Shai Maskit, a senior Product Manager at Kaminario. I posed my usual set of questions to learn a bit more about their AFA products.
Many of you will be aware that Storage DRS uses Storage I/O Control (SIOC) for load balancing based on I/O metrics. However, a statement in one of our white papers has recently raised a few questions with both customers and partners. The statement is as follows:
“Queue depth throttling is not compatible with Storage DRS” (p. 34 of http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf).
This assertion led many to believe that Storage DRS would not work well with Adaptive Queuing (AQ), another of VMware’s queue depth throttling mechanisms. Internally, however, many felt that the statement was not accurate, but some work was needed to verify that Adaptive Queuing would not cause any issues. This led to a number of tests being run with Storage DRS and both of our queue throttling features, SIOC and Adaptive Queuing. I am using this post to share those results.
I’m sure Frank Denneman will need no introduction to many of you reading this article. Frank and I both worked in the technical marketing organization at VMware, before Frank moved on to PernixData last year and I moved to Integration Engineering here at VMware. PernixData released FVP 1.0 last year, and I did a short post on it here. I’d seen a number of people discussing new FVP features in the community, especially after a presentation by PernixData co-founder Satyam Vaghani at Tech Field Day 5 (#TFD5). I decided to reach out to Frank and see if he could spare some time to revisit some of the new features that PernixData is planning to introduce. Fortunately, he did. I started by asking Frank how PernixData is doing in general, before moving on to the new bits.
[Updated] This is a very short post, as I only learnt about this recently myself. I thought it was only available in vSphere 5.5, but it appears to be in vSphere 5.1 too. Anyhow, Storage DRS now has a setting that allows you to configure the default VMDK affinity behavior. Historically, VMDKs from the same virtual machine were always kept together on the same datastore by default; you had to set a VMDK anti-affinity rule to keep them apart. Now you can set the default for this option, either to keep a VM’s VMDKs together on the same datastore or to keep them apart on different datastores.
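For those who prefer to script this sort of change, here is a minimal sketch of how the default intra-VM affinity could be toggled through the vSphere API using pyVmomi. The vCenter hostname, credentials and datastore cluster name are placeholders, and the type and method names (vim.storageDrs.PodConfigSpec, defaultIntraVmAffinity, ConfigureStorageDrsForPod_Task) reflect my reading of the Storage DRS portion of the API; treat it as a starting point rather than a tested recipe.

```python
#!/usr/bin/env python
# Minimal sketch (untested): set the Storage DRS default intra-VM affinity
# on a datastore cluster with pyVmomi. Hostname, credentials and the
# datastore cluster name are placeholders for illustration only.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skip certificate verification.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()

    # Find the datastore cluster (StoragePod) by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.StoragePod], True)
    pod = next(p for p in view.view if p.name == "DatastoreCluster-01")
    view.DestroyView()

    # False = keep a VM's VMDKs apart by default;
    # True  = keep them together on the same datastore (the historical behaviour).
    pod_spec = vim.storageDrs.PodConfigSpec(defaultIntraVmAffinity=False)
    spec = vim.storageDrs.ConfigSpec(podConfigSpec=pod_spec)

    # Apply the change; modify=True merges it into the existing SDRS config.
    task = content.storageResourceManager.ConfigureStorageDrsForPod_Task(
        pod=pod, spec=spec, modify=True)
    # A real script would wait on 'task' and check its result here.
finally:
    Disconnect(si)
```

Setting defaultIntraVmAffinity to False makes “keep VMDKs apart” the default behavior for the datastore cluster, while True gives the historical “keep VMDKs together” behavior described above.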
All Flash Arrays continue to make the news. Whether it is EMC’s XtremIO launch or Violin Memory’s current market woes, there is no doubt that AFAs continue to generate a lot of interest. Those of you interested in flash storage will not need an introduction to SolidFire. The company was founded by Dave Wright (ex-Rackspace) and has been around since 2009. I have been trying to catch up with SolidFire for some time, as I’d heard their pitch around Quality of Service on a per-volume basis and wanted to learn more, especially how it integrates with vSphere features. Recently I caught up with Dave Cahill and Adam Carter of SolidFire to have a chat about SolidFire in general and what the VMware integration points are.
I wrote about this issue on the vSphere blog some time back. Essentially, the issue described in that post was that if a VM being replicated via vSphere Replication was migrated to another datastore, it triggered a full resync, because the persistent state files (psf) which track the changes were deleted. All of the disks’ contents are then reread and checksummed on each side. This can have a significant impact on vSphere Replication’s RPO (Recovery Point Objective).