Tintri adds new VM-aware features and VAAI Support

During VMworld 2012 in San Francisco, I had a chance to catch up once again with the team from Tintri. My first introduction to Tintri was at last year’s VMworld, where they were runner-up in the ‘Hardware for Virtualization’ category of TechTarget’s Best of VMworld 2011 awards. Well, this year they went one better and won the Best of VMworld 2012 Gold award for Hardware for Virtualization. And for good reason. Let’s see what enhancements the last 12 months have brought to the Tintri platform.


Heads Up! ESXi cannot distinguish between EagerZeroedThick & LazyZeroedThick

VAAI NAS introduced the ability to create LazyZeroedThick & EagerZeroedThick disks on NFS datastores. Without VAAI NAS, one can only create thin VMDKs on NFS datastores. For those of you who are using VAAI NAS plugins, there is an important note in the 5.0U1 release notes that you should be aware of (a short sketch after the note shows where these provisioning flags surface in the vSphere API):

  • ESXi cannot distinguish between thick provision lazy zeroed and thick provision eager zeroed virtual disks on NFS datastores with Hardware Acceleration support
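
If you want to check what ESXi is actually reporting for a given disk, the provisioning type surfaces in the vSphere API as two booleans on the flat-file disk backing: thinProvisioned and eagerlyScrub. Below is a minimal pyVmomi sketch that prints these flags for one VM; the vCenter address, credentials and VM name are placeholders, and on a VAAI NAS datastore the eagerlyScrub value is subject to exactly the caveat described in the release note above.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Placeholder connection details - substitute your own vCenter/ESXi and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.name != "TestVM":          # hypothetical VM name
        continue
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk) and \
           isinstance(dev.backing, vim.vm.device.VirtualDisk.FlatVer2BackingInfo):
            # thinProvisioned / eagerlyScrub describe how the VMDK was provisioned
            print(dev.deviceInfo.label, dev.backing.fileName,
                  "thin =", dev.backing.thinProvisioned,
                  "eagerlyScrub =", dev.backing.eagerlyScrub)
Disconnect(si)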


Virsto Software for vSphere Overview

I’d met Virsto Software at previous VMworld conferences, but never had a chance to have a meaningful discussion regarding their products and solutions. On a recent trip to the US, I had the pleasure of meeting with Eric Burgener at the Virsto offices in Sunnyvale. He kindly took the time to give me an overview of their Virsto for vSphere 1.5 product.

Overview
Virsto Software aims to provide the advantages of VMware’s linked clones (single image management, thin provisioning, rapid creation) while delivering better performance than eager zeroed thick (EZT) VMDKs.

To achieve this, Virsto provide two components – a software storage appliance and a service which runs on each ESXi host. Block storage devices from your traditional SAN are first mapped as RDMs (Raw Device Mappings) to the storage appliance. The appliance then takes these devices, creates a very large storage pool (called the vSpace) and a log (called the vLog), and presents the pool as an NFS datastore to your ESXi hosts. The Virsto appliance can then apply its own ‘secret sauce’ to how I/Os are handled, and requests to create VMDKs (Virtual Machine Disks) in fact instantiate Virsto “vDisks” under the covers. However, these Virsto “vDisks” still look like native thin VMDKs, which is important because it means vSphere administrators can manage this storage using the standard VMware workflows.

The Virsto (NFS) Datastore

From an I/O perspective, all reads go directly to the vSpace pool. Virsto implement locality of reference for each of the VMs deployed on the datastore, which allows reads to be handled sequentially in most cases. Virsto estimate that reads via the vSpace could be 30-40% faster than reads going directly to the SAN.

All writes go to the circular log. Once a write is received into the vLog, an acknowledgment is sent back to the initiator, and the write is destaged from the vLog to the vSpace later. Before destaging occurs, I/Os can be reassembled so that destaging takes place in contiguous chunks. Since this is a circular log with regular destaging, Virsto estimate that 10GB per host is all that is needed. For best performance, Virsto suggest that the log is placed on a very fast HDD or even an SSD. With this circular log approach, Virsto estimate that they can achieve a 10-fold increase in write performance over standard SANs.
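
To make that write path a little more concrete, here is a toy Python sketch of the general pattern described above: acknowledge the write as soon as it lands in a fixed-size circular log, then coalesce and destage to the backing pool in the background. The class, sizes and data structures are made up for illustration – this is not Virsto’s implementation.

from collections import deque

class CircularWriteLog:
    """Toy model of an ack-on-log, destage-later write path (illustrative only)."""

    def __init__(self, capacity_bytes, backing_pool):
        self.capacity = capacity_bytes      # e.g. the ~10GB per host Virsto suggest
        self.used = 0
        self.pending = deque()              # (offset, data) records not yet destaged
        self.pool = backing_pool            # dict standing in for the vSpace pool

    def write(self, offset, data):
        """Land the write in the log and acknowledge immediately."""
        if self.used + len(data) > self.capacity:
            self.destage()                  # make room before accepting more writes
        self.pending.append((offset, data))
        self.used += len(data)
        return "ACK"                        # the initiator is acknowledged here, not after destage

    def destage(self):
        """Flush pending writes to the pool in offset order so adjacent blocks land together."""
        for offset, data in sorted(self.pending):
            self.pool[offset] = data
        self.pending.clear()
        self.used = 0

The key design point is that the initiator’s write latency is bounded by the small, fast log device rather than by the SAN, which is where the claimed 10-fold improvement comes from; it is also the pending log entries that make the replay described in the Failure Handling section below possible.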

The nice thing about the Virsto appliance is that it is very lightweight – only 1 vCPU & 1GB RAM – and Virsto can virtualize up to 1 petabyte of back-end storage. Using their “vDisk” technology, Virsto tell me that they have the ability to deliver upwards of 10,000 snapshots and writable clones, something that could be of interest to potential VDI customers.

VMDK format
I asked Eric to elaborate a little more on their “vDisk” technology. It would appear that VMDKs are created using standard VMware workflows, but behind the scenes they are deployed as thin Virsto “vDisks” onto the NFS datastore. However, with the Virsto appliance there is no overhead when extending the thin VMDK (traditional thin disks have to perform a zero-on-first-write operation as new blocks are allocated). Virsto claim that thin VMDKs deployed on the Virsto appliance can therefore outperform the standard eager zeroed thick (EZT) format VMDK. The other neat thing about Virsto “vDisks” is that space can be reclaimed from within the VMDK, making them very space efficient. This has been a major pain point for many customers.

vSphere Integration
There is a vSphere client plugin for the Virsto appliance for management functionality, though some operations would still need to be done outside of the client. There is no VAAI functionality at the time of writing, but Virsto are working on implementing the Fast File Clone primitive to allow VMware linked clones to use Virsto native snapshots for VDI solutions.

Failure Handling
The immediate question is: what happens to outstanding writes in the circular vLog which have not yet been destaged when there is a failure? Eric explained that Virsto can take the vLog and attach it to another host in the cluster. Once it is attached, everything in the log device gets flushed to the vSpace (in about 10 to 15 seconds). VMs which have failed, and are then restarted by vSphere HA, will automatically have access to all of their data.

What about a failure of the appliance itself (or indeed of the ESXi host on which it resides)? This is where vSphere HA again plays a role – the Virsto appliance is monitored by vSphere HA using VMware Tools heartbeat monitoring, so if it fails, it gets restarted.

Any writes in flight that have not yet been acknowledged by the log will have to be re-submitted after recovery, but anything which has been committed to the log is not lost on failover.

In most cases, recovery from a Virsto appliance failure happens quickly enough that the VMs on the host don’t even need to be restarted (if it was just a Virsto appliance failure and not an ESXi host failure). If it was a host failure, then the recovery order is (1) replay the log from the failed host, and (2) start the VMs elsewhere in the cluster (the Virsto service doesn’t need to restart because it is already running on every other node in the cluster).

See Virsto at VMworld
Virsto are an exhibitor at VMworld 2012 at booth 414. In addition to providing a demonstration of their new Virsto Software for vSphere (version 1.5), I have been told by the Virsto guys that they are also going to do a demonstration of Virsto working with EMC’s new VFCache. Definitely worth checking out in my opinion.

Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @CormacJHogan

SimpliVity Announce OmniCube Storage Appliance

I recently had the pleasure of chatting with Jesse St. Laurent, Product Director at a new storage startup called SimpliVity. SimpliVity finally exited stealth mode today, but has been around since the end of 2009, with development starting in earnest in 2010.

The name of the hardware storage appliance which SimpliVity have just announced is the OmniCube. When I asked Jesse to describe the features of the appliance, he listed the following:


EMC Isilon – OneFS Mavericks Release Overview

EMC Isilon are providing even more vSphere integration features in their upcoming ‘Mavericks’ release of the OneFS operating system. This is great to see. The integration is in the area of the vSphere APIs, both for Array Integration (VAAI) & Storage Awareness (VASA).

Let’s have a look at the VAAI enhancements first.

1. VAAI NAS integration

  • Full File Clone/NFS File Copy – The Full File Clone primitive calls the storage array’s replication facility. In Isilon’s case, a writable snapshot of the file is created, saving space since the whole VM disk does not need to be cloned. This is very similar to the VAAI block primitive XCOPY. One difference I do need to call out between the block and NAS primitives, however, is that the NAS Full File Clone primitive only works with VMs that are not running. In other words, Storage vMotion operations do not use the Full File Clone primitive at this time, unlike Storage vMotion on VAAI-capable block devices. I want to highlight that this is not a limitation in Isilon’s implementation; rather it is a limitation on the vSphere side. It is definitely something I want to see addressed in a future implementation of VAAI.
  • NFS Extended Stats – With NFS, vSphere only gets generic information about space consumption on thin provisioned datastores. The full details of how much space is actually being consumed by a file on an NFS datastore at the back-end are not visible. This can lead to some space-management administration overhead, as vSphere administrators may need to contact the storage admin for detailed information. In vSphere 5, extended file and filesystem information is available via this primitive. For example, how much actual space is being consumed by a VMDK on the back-end can now be retrieved (see the short sketch after this list).
  • NFS Reserve Space – In earlier versions of vSphere, there was no way for NFS datastores to create the equivalent of an “eager-zeroed thick” VMDK. In vSphere 5, with VAAI NAS support, you now have the ability to reserve the entire space for a VMDK on an NFS datastore with this Reserve Space primitive.
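
As an illustration of the kind of information involved, per-VM space usage surfaces in the vSphere API as committed (actually consumed) versus uncommitted (provisioned but not yet consumed) storage, and it is back-end detail like this that the Extended Stats primitive helps make accurate for thin files on NFS. Below is a short pyVmomi sketch that prints those figures for a VM; the vCenter address, credentials and VM name are placeholders.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Placeholder connection details and VM name - substitute your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.name == "TestVM":                                   # hypothetical VM name
        s = vm.summary.storage                                # per-VM storage summary
        print("Committed (actually consumed): %.1f GB" % (s.committed / 2**30))
        print("Uncommitted (provisioned, not consumed): %.1f GB" % (s.uncommitted / 2**30))
        print("Total provisioned: %.1f GB" % ((s.committed + s.uncommitted) / 2**30))
Disconnect(si)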

These primitives, of course, require the EMC Isilon VAAI NAS plugin, but this is easily installed via VUM, the VMware Update Manager. Having watched some of the tests, I can say the improvement is significant. An offline clone operation of a 120GB VM took about 7 minutes 15 seconds without VAAI. With VAAI, it took 1 minute and 29 seconds – almost 5 times faster. Nice!

2. VASA

vSphere Storage APIs for Storage Awareness, commonly referred to as VASA, is a set of APIs that permits storage arrays to integrate with vCenter for management functionality.

Isilon are now surfacing a number of device capabilities via VASA. These are visible in the vSphere client when examining datastores.

Capability – Description

  • ARCHIVE – Datastore resides on Isilon NL-Series hardware
  • CAPACITY – Datastore resides on Isilon X-Series hardware
  • HYBRID – Datastore resides on a mixed Isilon hardware configuration
  • INVALID – Datastore resides on a mixed Isilon hardware configuration
  • PERFORMANCE – Datastore resides on Isilon S-Series hardware or SSD-accelerated storage
  • ULTRA_PERFORMANCE – Datastore resides on Isilon S-Series hardware with SSD acceleration
  • UNKNOWN – The storage capability for this object is unknown

This is great to see. Isilon customers who deploy the VASA plugin along with upgrading to the Mavericks release can now reap the full benefits of VMware’s Profile Driven Storage feature. What this means is that VMs can be deployed to the correct datastore each & every time, taking the guesswork out of initial placement. The other benefit is that you can continually check the compliance state of your VM’s storage throughout its life-cycle (e.g. detect if someone inadvertently migrated it to a lower tier of backing storage). You can learn more about storage profiles in this blog post I did on the vSphere Storage Blog.

We don’t have enough vendors doing offloading with VAAI NAS, so it is a welcome sign to see Isilon introduce this. And I certainly like the VASA capability descriptions that they are surfacing – I think this makes it nice and clear to Isilon customers what sort of device(s) are backing their respective datastores.

EMC are a diamond sponsor at this year’s VMworld 2012 in San Francisco. I’m sure Jay, James and the rest of the Isilon team would be delighted to show you these new features. You’ll find those guys at booth 1203.

Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @CormacJHogan