Virsto Software for vSphere Overview

I’d met Virsto Software at previous VMworld conferences, but never had a chance to have a meaningful discussion regarding their products and solutions. On a recent trip to the US, I had the pleasure of meeting with Eric Burgener at the Virsto offices in Sunnyvale. He kindly took the time to give me an overview of their Virsto for vSphere 1.5 product.

Overview
Virsto Software aims to provide the advantages of VMware’s linked clones (single image management, thin provisioning, rapid creation) while delivering better performance than eager zeroed thick (EZT) VMDKs.

To achieve this, Virsto provide two components – a software storage appliance and a service which runs on each ESXi host. Block storage devices from your traditional SAN are first mapped as RDMs (Raw Device Mappings) to the storage appliance. The appliance then takes these devices, creates a very large storage pool (called the vSpace) and a log (called the vLog), and presents the pool as an NFS datastore to your ESXi hosts. The Virsto appliance can then apply its own ‘secret sauce’ to how I/Os are handled, and requests to create VMDKs (Virtual Machine Disks) in fact instantiate Virsto “vDisks” under the covers. However, these Virsto “vDisks” do indeed look like native thin VMDKs. This is important, as it means vSphere Administrators can manage this storage using the standard VMware workflows.

The Virsto (NFS) Datastore

From an I/O perspective, all reads go directly to the vSpace pool. Virsto maintain locality of reference for each of the VMs deployed on the datastore, which allows reads to be handled sequentially in most cases. Virsto estimate that reads served from the vSpace can perform 30 to 40% better than reads going directly to the SAN.

All writes go to the circular log. Once a write is received into the vLog, an acknowledgment is sent back to the initiator, and the write is later destaged from the vLog to the vSpace. Before destaging occurs, I/Os can be reassembled so that destaging takes place in contiguous chunks. Since this is a circular log with regular destaging, Virsto estimate that a 10GB log per host is all that is needed. For best performance, Virsto suggest that the log be placed on a very fast HDD or even an SSD. With this circular log approach, Virsto estimate that they can achieve a 10-fold increase in write performance over standard SANs.
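The write path described above – acknowledge as soon as the write hits the log, then coalesce and destage in the background – can be sketched as follows. This is a toy model for illustration only; the class and method names are my own, not any real Virsto API.

```python
from collections import deque

class VLogSketch:
    """Toy model of a write-back log in front of a storage pool.
    Illustrative only -- names like vlog/vspace follow this article,
    not Virsto's actual implementation."""

    def __init__(self):
        self.vlog = deque()   # circular log: (offset, data) records
        self.vspace = {}      # backing pool, keyed by block offset

    def write(self, offset, data):
        # 1. Append to the log and acknowledge immediately,
        #    so the initiator sees low write latency.
        self.vlog.append((offset, data))
        return "ack"

    def destage(self):
        # 2. Later, order pending records by offset so adjacent
        #    blocks are written to vSpace as contiguous chunks.
        pending = sorted(self.vlog)
        self.vlog.clear()
        for offset, data in pending:
            self.vspace[offset] = data

log = VLogSketch()
assert log.write(8, b"b") == "ack"   # out-of-order arrivals...
assert log.write(4, b"a") == "ack"
log.destage()                        # ...destaged in offset order
print(sorted(log.vspace))            # → [4, 8]
```

The key idea is simply that the latency-sensitive acknowledgment is decoupled from the (slower, but now sequential) write to the backing pool.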

The nice thing with the Virsto appliance is that it is very lightweight – only 1 vCPU & 1GB RAM. And Virsto can virtualize up to 1 petabyte of back-end storage. Using their “vDisk” technology, Virsto tell me that they have the ability to deliver upwards of 10,000 snapshots and writable clones, something that could be of interest to potential VDI customers.

VMDK format
I asked Eric to elaborate a little more on their “vDisk” technology. It would appear that VMDKs are created using standard VMware workflows, but behind the scenes, they are deployed as thin Virsto “vDisks” onto the NFS datastore. However, with the Virsto appliance, there is no overhead when extending the thin VMDK (traditional thin disks must perform a zero-on-first-write operation when a new block is allocated). Virsto claim that thin VMDKs deployed on the Virsto appliance can therefore outperform the standard eager zeroed thick (EZT) format VMDK. The other neat thing about Virsto “vDisks” is that space can be reclaimed from within the VMDK, making them very space efficient. This has been a major pain point for many customers.
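To see why skipping the zero-on-first-write step matters, here is a deliberately simplified contrast between the two allocation strategies. This is purely illustrative; it is not Virsto's actual implementation, just a sketch of why tracking allocated blocks in metadata saves an I/O on every first write.

```python
class ThinDisk:
    """Toy contrast: a classic thin disk zeroes each newly allocated
    block on first write (an extra I/O), while a log-structured-style
    'vDisk' can skip that because its metadata already records which
    blocks have ever been written. Illustrative only."""

    def __init__(self, zero_on_write):
        self.zero_on_write = zero_on_write
        self.blocks = {}      # allocated blocks
        self.io_count = 0     # back-end I/Os issued

    def write(self, block, data):
        if block not in self.blocks and self.zero_on_write:
            self.io_count += 1   # extra zeroing I/O on first allocation
        self.blocks[block] = data
        self.io_count += 1       # the actual write

classic = ThinDisk(zero_on_write=True)
vdisk = ThinDisk(zero_on_write=False)
for disk in (classic, vdisk):
    disk.write(0, b"x")

print(classic.io_count, vdisk.io_count)  # → 2 1
```

Every first write to a new region of a classic thin disk costs roughly double the back-end I/O, which is exactly the overhead EZT disks avoid by zeroing everything up front.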

vSphere Integration
There is a vSphere client plugin for the Virsto appliance for management functionality, though some operations still need to be done outside of the client. There is no VAAI functionality at the time of writing, but Virsto are working on implementing the Fast File Copy primitive to allow VMware linked clones to use Virsto native snapshots for VDI solutions.

Failure Handling
The immediate question is: what happens to outstanding writes in the circular vLog which have not yet been destaged when a host fails? Eric explained that Virsto can take the vLog and attach it to another host in the cluster. Once it is attached, everything in the log device gets flushed (in about 10 to 15 seconds) to the vSpace. VMs which have failed, and are then restarted by vSphere HA, will automatically have access to all their data.

What about a failure of the appliance itself (or indeed the ESXi host on which it resides)? This is where vSphere HA again plays a role – the Virsto appliance is monitored by vSphere HA using VMware Tools heartbeat monitoring, so if it fails, it gets restarted.

Any writes in flight that have not yet been acknowledged by the log will have to be re-submitted after recovery, but anything which has been committed to the log is not lost on failover.

In most cases, recovery from a Virsto appliance failure happens quickly enough that the VMs on the host don’t even need to be restarted (if it was just a Virsto appliance failure and not an ESXi host failure). If it was a host failure, then the recovery order is (1) replay the log from the failed host, (2) start the VMs elsewhere in the cluster (the Virsto service doesn’t need to restart because it’s already running on every other node in that cluster).
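The host-failure recovery order described above can be sketched as a short sequence. Again, this is my own illustrative model – the `Host` class and its method names are hypothetical, not part of any Virsto or vSphere API.

```python
class Host:
    """Hypothetical model of a cluster node for illustrating the
    recovery order from the article -- not a real API."""

    def __init__(self, name):
        self.name = name
        self.vlog = []       # acknowledged-but-undestaged writes
        self.vspace = {}     # view of the shared pool (simplified)
        self.vms = []        # VMs placed on this host
        self.running = []    # VMs currently powered on here

    def attach_and_flush(self, vlog):
        # Replay the failed host's log into vSpace (the article says
        # this takes roughly 10-15 seconds), so no acked write is lost.
        for offset, data in vlog:
            self.vspace[offset] = data
        vlog.clear()

def recover(failed, cluster):
    # (1) A surviving node attaches and replays the failed host's vLog.
    survivor = next(h for h in cluster if h is not failed)
    survivor.attach_and_flush(failed.vlog)
    # (2) vSphere HA restarts the failed host's VMs elsewhere; the
    #     Virsto service is already running on every surviving node,
    #     so no service restart is needed.
    for vm in failed.vms:
        survivor.running.append(vm)
    return survivor

h1, h2 = Host("esx01"), Host("esx02")
h1.vlog = [(0, b"data")]     # one acknowledged write, not yet destaged
h1.vms = ["vm1"]
survivor = recover(h1, [h1, h2])
print(survivor.name, survivor.running)  # → esx02 ['vm1']
```

The important property is the ordering: the log replay in step (1) completes before the VMs restart in step (2), which is why restarted VMs see all of their acknowledged data.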

See Virsto at VMworld
Virsto are an exhibitor at VMworld 2012 at booth 414. In addition to providing a demonstration of their new Virsto Software for vSphere (version 1.5), I have been told by the Virsto guys that they are also going to do a demonstration of Virsto working with EMC’s new VFCache. Definitely worth checking out in my opinion.

Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @CormacJHogan