PernixData FVP 1.0 is GA
Before I left for PTO, I wrote an article on a number of storage vendors you should be checking out at this year’s VMworld 2013. One of these was a new start-up called PernixData. With tongue firmly in cheek, I suggested that PernixData might use VMworld as a launchpad for their FVP (Flash Virtualization Platform) product. Well, needless to say, my good friend Satyam Vaghani, CTO at PernixData, reached out to me to say that they were in fact announcing FVP before VMworld. He shared some details with me, which I can now pass on to you if you haven’t already heard about the announcement.
PernixData are positioning FVP as the ‘first flash hypervisor’. In a nutshell, PernixData FVP clusters server-side flash into a new tier that enables storage performance to scale out independently of capacity. The product is integrated with vSphere through the installation of a VIB (vSphere Installation Bundle) and supports VMware cluster operations like DRS & HA, but more importantly, it provides support for both read (write-through) and write (write-back) caching. Write-back means that the write is acknowledged as soon as it hits flash, before it is destaged to the backing storage array. This is allowable because, with FVP, all writes are replicated to flash on other hosts, which protects the data should any single flash device fail. You can now have a virtual machine with both read and write caching, and still do operations such as vMotion, etc. The fact that it is using flash is completely transparent to the virtual machine; it simply benefits from the accelerated I/O. This is something quite unique in my opinion – and remember, this is all done in software. And it is available now.
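To make the distinction between the two policies concrete, here is a minimal sketch in Python. This is not PernixData’s implementation – names like FlashCache, FlashDevice and destage are hypothetical – but it illustrates the ordering described above: in write-through mode the write is only acknowledged once the array has it, while in write-back mode it is acknowledged once the local flash copy and its replicas on peer hosts are written, and flushed to the array later.

```python
class FlashDevice:
    """Stands in for a server-side flash device on one host (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, addr, data):
        self.blocks[addr] = data


class FlashCache:
    def __init__(self, local, replicas, backing_store, write_back=False):
        self.local = local                  # flash on this host
        self.replicas = replicas            # flash on peer hosts (failure protection)
        self.backing_store = backing_store  # the shared storage array
        self.write_back = write_back
        self.dirty = set()                  # blocks not yet destaged to the array

    def write(self, addr, data):
        self.local.write(addr, data)
        if self.write_back:
            # Replicate to peer flash BEFORE acknowledging, so the data
            # survives the loss of any single flash device.
            for peer in self.replicas:
                peer.write(addr, data)
            self.dirty.add(addr)
            return "ack"                    # the array is written later (destage)
        else:
            # Write-through: acknowledge only after the array has the write.
            self.backing_store[addr] = data
            return "ack"

    def destage(self):
        """Flush dirty blocks to the array in the background."""
        for addr in sorted(self.dirty):
            self.backing_store[addr] = self.local.blocks[addr]
        self.dirty.clear()


array = {}
cache = FlashCache(FlashDevice("host1"), [FlashDevice("host2")], array,
                   write_back=True)
cache.write(0, b"hello")   # acked after local flash + replica, not the array
assert 0 not in array      # array not yet updated
cache.destage()
assert array[0] == b"hello"
```

The key point the sketch shows is that write-back only becomes safe once the replication step happens before the acknowledgement; without it, losing the host’s flash device would lose acknowledged writes.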
If you want more detail on PernixData, you can visit their website here, where you will find data sheets and white papers. Frank Denneman has also written a great series of articles here.
PernixData have also shared some initial pricing details. For their enterprise bundle, the cost is $7,500 per host, which allows you to use cache on an unlimited number of virtual machines on that host. I have also heard that PernixData are actively working on additional packages for SMBs and Service Providers.
PernixData are a silver sponsor at VMworld 2013. I’d definitely recommend checking them out in the Solutions Exchange if you are attending VMworld this year.
Sounds very interesting. Especially if, like where I work, additional capacity is relatively easy and cheap compared to increasing performance at the storage array.
What method does the cache use to replicate cached contents? Does the cache have to be copied to another node before the write I/O is acknowledged? Akin to EMC DRAM cache operations in CX and VNX arrays?
Frank covers a lot of this in his articles. E.g. http://frankdenneman.nl/2013/07/19/write-back-and-write-through-policies-in-fvp/