NetApp Flash Accel Launch
Last year, NetApp announced a new host-side cache acceleration feature to complement their Virtual Storage Tiering (VST) technology. Rather than keeping all of your data in flash, VST places hot data in flash while moving cold data to cheaper and slower media. NetApp are offering this as an end-to-end technology, from the server down through the array controller (Flash Cache) to the disk pools (Flash Pool). A major part of this is Flash Accel, also announced in the latter part of last year, which is the server-side flash component of VST. On the back of their recently announced All Flash Array, NetApp are now making Flash Accel available to the general public.
I had the opportunity to catch up with Chittur Narayankumar (Kumar) of NetApp's Technical Marketing team, who has responsibility for Flash Accel. Kumar gave me a preview of what Flash Accel can do for joint VMware/NetApp customers. As mentioned, Flash Accel extends NetApp's VST technology to the server. The server-side flash device can be an SSD or a PCIe flash card, so long as the card presents a block SCSI device to the host. At the time of writing, Flash Accel is a write-through (read) cache; it does not do write-back (write) caching.
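To make the distinction concrete, here is a minimal Python sketch of a write-through read cache. The `backend` object and its `read`/`write` methods are invented for illustration; this shows the general caching pattern, not NetApp's implementation.

```python
# Illustrative write-through (read) cache. Reads are served from flash
# when possible; writes always go straight through to the backing array,
# so the cache never holds dirty data (unlike a write-back cache).

class WriteThroughReadCache:
    def __init__(self, backend):
        self.backend = backend   # hypothetical array/disk interface
        self.cache = {}          # block address -> cached data

    def read(self, block):
        if block in self.cache:           # cache hit: served from flash
            return self.cache[block]
        data = self.backend.read(block)   # cache miss: fetch from the array
        self.cache[block] = data          # warm the cache for next time
        return data

    def write(self, block, data):
        self.backend.write(block, data)   # write-through: array gets it first
        self.cache[block] = data          # keep the cached copy current
```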
Of course, the big question you might ask is around cache coherency. If multiple hosts are caching the data, and the data changes at the back end on the array (e.g. a SnapRestore operation on a VM), how do hosts with cached blocks verify that their data is still valid? Flash Accel compares its cached metadata with the array to decide whether cached blocks need to be evicted. One nice part of this is that NetApp will only evict the data that has actually changed on the back end. They highlight that this compares favorably with other vendors' solutions, which do a complete cache eviction, after which you have to wait a considerable length of time for the flash to warm up again.
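Here is a rough sketch of what that selective eviction might look like, assuming a per-block version stamp (the actual metadata NetApp compares is not public):

```python
# Hypothetical selective invalidation: each cached block carries a version
# stamp. After a back-end change (e.g. a SnapRestore), only blocks whose
# stamps no longer match the array are evicted; the rest stay warm.

def revalidate(cache, array_versions):
    """cache: block -> (data, version); array_versions: block -> version."""
    stale = [blk for blk, (_, ver) in cache.items()
             if array_versions.get(blk) != ver]
    for blk in stale:
        del cache[blk]    # evict only the blocks that changed on the array
    return stale          # everything else remains cached and warm
```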
They also highlight that the Flash Accel cache is persistent: it is maintained across reboots, which in turn means a much shorter warm-up time.
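Persistence presumably amounts to keeping the cache metadata on the flash device itself so it survives a reboot. A speculative sketch, with an invented file path and string block addresses assumed:

```python
import json

# Hypothetical persistence of the block -> version map. The data blocks
# already live on flash; only the map needs saving so that, after a
# reboot, the cache can be revalidated (as above) rather than rebuilt cold.

def save_metadata(cache, path="/flash/cache_meta.json"):
    with open(path, "w") as f:
        json.dump({blk: ver for blk, (_, ver) in cache.items()}, f)

def load_metadata(path="/flash/cache_meta.json"):
    with open(path) as f:
        return json.load(f)   # reload the map, then revalidate stale blocks
```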
In further discussions, Kumar told me that what NetApp eventually want to do is build a mechanism that allows communication between Flash Accel and Flash Cache on the array, so that in the event of a cache miss, Flash Cache can inform Flash Accel that it has the data it needs, rather than Flash Accel having to go straight to disk for the data. That would be very cool.
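In effect, a cache miss would walk down the tiers rather than jumping straight to disk. A speculative sketch of that lookup path (the interfaces here are invented, and the feature itself doesn't exist yet):

```python
# Hypothetical tiered read: server-side flash (Flash Accel) first, then
# array-side flash (Flash Cache), then spinning disk as the last resort.

def tiered_read(block, flash_accel, flash_cache, disk):
    data = flash_accel.get(block)
    if data is not None:
        return data                    # hit in server-side flash
    data = flash_cache.get(block)
    if data is not None:
        flash_accel.put(block, data)   # promote into the server-side cache
        return data                    # hit in array-side flash
    data = disk.read(block)            # miss everywhere: go to disk
    flash_accel.put(block, data)
    return data
```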
NetApp have entered into an agreement with Fusion-io to resell their ioMemory card. To leverage Flash Accel, a VIB is installed on the ESXi host. The flash device can then be carved up into multiple cachespaces, which are presented to the Guest OS. You can have only one cachespace per VM, but you can enable caching for up to 32 VMs. The Guest OS must have an agent installed to leverage the flash cachespace; the agent is only available for Windows 2008 R2 at the moment. I'm guessing people would most like to see support for additional Guest OSes (such as Linux). All of the Flash Accel configuration work is done via a new Flash Accel Management console, which is used for installation, provisioning, and assigning cache to VMs. This console is available as a plugin to VSC 4.2, which runs in VMware vCenter.
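To illustrate the provisioning model, here is a hypothetical sketch of carving a flash device into per-VM cachespaces. The one-cachespace-per-VM and 32-VM limits come from the product; the API itself is invented.

```python
MAX_VMS = 32   # per the current Flash Accel limits

class FlashDevice:
    """Toy model of a host flash device carved into per-VM cachespaces."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.cachespaces = {}          # VM name -> cachespace size (GB)

    def carve_cachespace(self, vm, size_gb):
        if vm in self.cachespaces:
            raise ValueError("only one cachespace per VM")
        if len(self.cachespaces) >= MAX_VMS:
            raise ValueError("caching can be enabled for at most 32 VMs")
        free = self.capacity_gb - sum(self.cachespaces.values())
        if size_gb > free:
            raise ValueError("not enough free flash capacity")
        self.cachespaces[vm] = size_gb
```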
At the time of writing, Flash Accel is only available for vSphere 5.0 hosts; support for 5.1 will come in a later release. The initial release of Flash Accel does not support vSphere features such as vMotion, DRS, and HA. However, I understand that these are definitely on the roadmap for future updates.
It seems like the march towards flash continues unabated, with almost all the major storage vendors embracing both all flash arrays and server-side caching solutions. Exciting times!