Violin Memory & 1 Million IOPs from a single VM
Another Flash Array vendor that I wanted to meet with at this year's VMworld in San Francisco was Violin Memory. For those of you who have been following the keynotes at VMworld 2012, one of the things that will have stood out is the 1 million IOPS from a single VM. Now, 1 million IOPS isn't something new. Last year, VMware's performance team published a paper on how they achieved 1 million IOPS from a single vSphere 5.0 host running six virtual machines. But this year, we achieved 1 million IOPS from a single Virtual Machine. And guess what storage the VM was running on? Yep, a Violin Memory Flash Array.
At Violin's booth at VMworld San Francisco, I caught up with an old friend and colleague of mine, Vinay Gaonkar, and Violin's director of marketing, Ashish Gupta. They gave me a good overview of Violin Memory's new 6000 series arrays, and we discussed the various vSphere integration points.
First I asked about the storage objects that Violin surfaces up to the ESXi host, and which storage protocols the array supports. They can create multiple LUNs of the same capability inside a given array and export them to the ESXi host. Violin support Fibre Channel, InfiniBand and iSCSI (over 10GigE).
I then went on to ask about the VAAI primitives that Violin currently support on their arrays. None are supported at present, but Ashish told me that they plan to support all of the primitives in the very near future. They have already started work on implementing the Write Same and XCOPY primitives, with others to follow soon.
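As an aside, once these primitives do ship, it is easy enough to check which VAAI operations a device claims support for from the ESXi host itself. Here is a minimal sketch; the captured output below is hypothetical (a Violin device with no VAAI support yet, matching what Ashish told me), but the field names follow what `esxcli storage core device vaai status get` prints on an ESXi 5.x host:

```shell
# Hypothetical output captured from an ESXi 5.x host with:
#   esxcli storage core device vaai status get
# The device identifier is a placeholder; field names match the real command.
vaai_status='naa.xxxxxxxxxxxxxxxx
   VAAI Plugin Name:
   ATS Status: unsupported
   Clone Status: unsupported
   Zero Status: unsupported
   Delete Status: unsupported'

# Count how many VAAI primitives the device reports as supported.
# Anchoring on ": supported" at end-of-line avoids matching "unsupported".
supported=$(printf '%s\n' "$vaai_status" | grep -c ': supported$')
echo "Supported VAAI primitives: $supported"
```

On an array with Write Same (Zero) and XCOPY (Clone) implemented, you would expect those two status lines to flip to `supported`.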
My next question was about the snapshot and clone technology on the 6000 series arrays. Is it VM-centric or volume-centric? In other words, does it recognise the VMDK as a storage object, or just as a file on a LUN? Today it is LUN-centric, but as Ashish points out, once Violin integrates with VMware's upcoming Virtual Volumes technology, their snapshots will of course become VM-centric. I asked a similar question about their replication technology, and again Ashish stated that it is LUN-orientated at present. Again, Virtual Volumes will make a difference here.
My next question was around BC/DR support, and whether there were any plans for SRM/SRA support with their replication technology. Ashish stated that yes, they plan to support Site Recovery Manager, and they hope to have a solution in place in H1 2013.
One thing I noticed at the booth is that Violin have a vSphere plugin for management and monitoring. However, this is currently a plugin for the C# client, so I asked about their plans for a web client plugin. Ashish again said that they expect to have a web client plugin for management and monitoring by H1 2013.
I finally finished with a question about Violin's plans for a VASA provider. You guessed it – H1 2013.
Violin Memory are a sponsor of VMworld EMEA and will be in Barcelona in October 2012. If a Monster VM is what you need, then Violin Memory's Flash Arrays are definitely worth checking out. 1 million IOPS in a 3U appliance, with no single point of failure thanks to their fully redundant configuration and vRAID algorithms, is a very compelling solution indeed. They have two kinds of arrays: MLC and SLC. While some of the vSphere integration points are not yet there, it's good to hear that there is a concerted effort from Violin to deliver on VAAI, VASA, SRM/SRA and a plugin for the web client.
Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @CormacJHogan
6 Replies to “Violin Memory & 1 Million IOPs from a single VM”
Excellent…. Would be great if we could have some performance benchmarks from Violin on vSphere 5.x.
There is this, Sunny – http://blogs.vmware.com/performance/2012/08/1millioniops-on-1vm.html. However, we don't have an official whitepaper just yet. Looking at the post, we can expect something post-VMworld.
Violin was part of Cisco VMmark benchmark world record.
In addition, check out my blog on 1M IOPS from 1 VM to get more perspective on this.
Thanks for sharing Vinay.
Thanks @Cormac & @Vinay…
Comments are closed.