I’m delighted to announce the availability of a new vSphere 6.5 core storage white paper. The paper covers new features such as VMFS-6 enhancements, policy-driven Storage I/O Control, policy-driven VM Encryption, NFS and iSCSI improvements and, of course, the new limit increases in vSphere 6.5. There are too many VMware folks to thank for putting this paper together, but you’ll find them all listed in the acknowledgements section. I do want to mention one person, however: a very special thanks to Cody Hosterman of Pure Storage, who spent a lot of time testing many of these new features and providing relevant feedback for inclusion in the paper. Thanks, Cody.
It has been some time since I last looked at Horizon View on Virtual SAN. The last time was when we first released VSAN, back in the 5.5 days, with Horizon View 5.3.1, the first release that interoperated with Virtual SAN. At the time, there was some funkiness with policies: View could only use the default policy, which showed up as “none” in the UI. The other issue was that you could not change the default policy via the UI, only through CLI commands (sketched below). Thankfully, things have come a long way since then. In this post, I will look at how Horizon View 7 interoperates with Virtual SAN 6.2, concentrating mostly on policies. However, Horizon View 7 also brings the new vmFork/Instant Clone technology and App Volumes, and I hope to do some posts on those features running on top of VSAN going forward.
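For anyone curious what that CLI-only workflow looked like, here is a rough sketch of the ESXi 5.5-era esxcli commands for inspecting and changing the VSAN default policy. Treat the policy string as illustrative; the exact syntax on your build is what the esxcli vsan policy help output reports.

  # Show the current default policy for each VSAN object class
  esxcli vsan policy getdefault

  # Example: make hostFailuresToTolerate=1 the default for VM disk objects
  esxcli vsan policy setdefault -c vdisk -p '(("hostFailuresToTolerate" i1))'

Each object class (vdisk, vmnamespace, vmswap, and so on) carries its own default, which is part of what made managing this from the CLI alone so cumbersome.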
As many of you are aware, I was at VMworld in San Francisco last week. I wrote a number of articles about some VMware storage announcements, such as EVO:RAIL, VAIO and VVols. However, there were, as usual, quite a number of storage vendors at this year’s conference. One of the vendors that I really wanted to learn more about was Kaminario, an all-flash array vendor that I’d heard a lot about. I had the pleasure of spending some time at their booth with Shai Maskit, a senior product manager at Kaminario. I posed my usual set of questions to learn a bit more about their AFA products.
Many of you will be aware that Storage DRS uses Storage I/O Control (SIOC) for load balancing based on I/O metrics. However, a statement in one of our white papers has recently raised a few questions with both customers and partners. The statement is as follows:
This assertion led many to believe that Storage DRS would not work well with Adaptive Queuing (AQ), another of VMware’s queue-depth throttling mechanisms. Internally, however, many felt that this wasn’t a true statement, but some work was needed to verify that it would not cause any issues. This led to a number of tests being run with Storage DRS and both of our queue-throttling features, SIOC and Adaptive Queuing. I am using this post to share those results.
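As background, and as a hedged sketch rather than a definitive reference, Adaptive Queuing works off two per-device parameters that control how ESXi reacts to QFULL/BUSY conditions from the array. In ESXi 5.5 and later these can be set per device with esxcli; the device name and values below are illustrative only:

  # Sample 32 I/Os for queue-full conditions and start throttling after
  # 4 QFULL/BUSY responses on this device (example values)
  esxcli storage core device set -d naa.xxxxxxxx --queue-full-sample-size 32 --queue-full-threshold 4

Once the threshold is hit, ESXi cuts the device queue depth and then gradually restores it after the array stops returning QFULL, which is the throttling behavior these tests exercised alongside SIOC.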
I’m sure Frank Denneman will need no introduction to many of you reading this article. Frank and I both worked in the technical marketing organization at VMware, before Frank moved on to PernixData last year and I moved to Integration Engineering here at VMware. PernixData released FVP 1.0 last year, and I did a short post on it here. I’d seen a number of people discussing new FVP features in the community, especially after the presentation by PernixData co-founder Satyam at Tech Field Day 5 (#TFD5). I decided to reach out to Frank to see if he could spare some time to revisit some of the new features that PernixData is planning to introduce. Fortunately, he could. I started by asking Frank how PernixData is doing in general, before moving on to the new bits.
Last week I had the opportunity to catch up with Mike Koponen and Dean Steadman of Fusion-io. I had met Mike and Dean at VMworld 2013, where we spoke about Fusion-io’s acquisition of NexGen Storage earlier last year and what plans Fusion-io had for it. Well, the result is ioControl Hybrid Storage, and we discussed some of the ioControl architecture as well as a number of vSphere integration points.
I recently had a customer reach out to discuss how VMware’s Storage I/O Control and Adaptive Queuing behaviors work with QLogic’s Execution Throttle feature. To be honest, I didn’t have a good understanding of QLogic’s Execution Throttle mechanism, so I did a little research to see whether this feature interoperates with VMware’s own I/O congestion management features.