A closer look at Primary Data

Primary Data were one of the storage vendors that I wanted to catch up with at VMworld 2015. I was fortunate enough to meet with Graham Smith, their Director of Virtualization Product Management. Graham gave me a demonstration of the Primary Data product in the Solutions Exchange at VMworld, and I also had an opportunity to visit their offices in Los Altos during a recent trip to the Bay Area and catch up once again with Graham and Kaycee Lai, SVP of Product Management & Sales at Primary Data. Before we get into the product and solution details, I wanted to go over a brief history of the company and the problem that they are trying to solve with their DataSphere Platform.

A brief history

From what I can gather, Primary Data is a company created by many of the original founding team from Fusion-io (now owned by SanDisk). They launched the company in August 2013, but only exited stealth in November 2014, so this year was the first time they appeared at VMworld. Another point worth mentioning is that Apple co-founder Steve Wozniak is their Chief Scientist.

The problem that is being solved

So what is it that Primary Data are trying to solve? I spoke to Graham about this and he stated that when they asked administrators what their biggest storage pain was, the answer was invariably the management of storage growth. Customers end up having many different silos of storage for different use cases, with some of the silos completely under-utilized (over-provisioned) from a performance perspective, and other silos under-utilized from a capacity perspective. There are then situations which require a lot of management from a storage admin perspective, such as having to provide tier1 storage “occasionally” for some VMs or applications (occasional batch jobs, Black Friday events), and making sure that dormant VMs/applications that are rarely used are not consuming tier1 storage. Having these different disparate silos, with differing capabilities, creates even further complexity for the administrator. Primary Data have a solution to avoid this siloing of different storage resources.

The solution

Primary Data’s DataSphere platform transparently presents storage from different vendors, media and protocols under a single, global NFS data space. This is achieved via their appliance, which is placed in the customer’s data center. Depending on whether a VMDK is hot, warm or cold, the DataSphere platform will automatically and non-disruptively migrate the VMDK to the appropriate storage tier. Hot data might be moved to flash on the server, warm data could be moved to SAN or NAS storage, and cold data could be moved to the cloud. The global NFS namespace allows their software to very simply move data between these different, disparate storage types, be they server-local, shared or cloud storage, in order to maintain your SLAs. This is all transparent to the end-user, who simply sees that the VM is compliant with the storage policy that was chosen for it (more on this shortly). Steve Wozniak is quoted as saying that “Primary Data finally makes it possible to automatically have the right data on the right storage tier at the right time, without the need to rip and replace a single storage system.”
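To make the hot/warm/cold idea concrete, here is a minimal Python sketch of this kind of temperature-based tiering. The thresholds, names and tier mapping are my own illustrative assumptions, not Primary Data’s actual implementation:

```python
# Hypothetical sketch of temperature-based tiering. Thresholds and tier
# names are made up for illustration; DataSphere's real logic is not public.

def classify_temperature(iops_last_24h: int) -> str:
    """Bucket a VMDK by recent activity (illustrative thresholds)."""
    if iops_last_24h > 10_000:
        return "hot"
    if iops_last_24h > 100:
        return "warm"
    return "cold"

# Temperature -> target tier, mirroring the example mapping above:
# hot to server flash, warm to SAN/NAS, cold to cloud.
TIER_FOR_TEMPERATURE = {
    "hot": "server-flash",
    "warm": "san-nas",
    "cold": "cloud",
}

def target_tier(iops_last_24h: int) -> str:
    return TIER_FOR_TEMPERATURE[classify_temperature(iops_last_24h)]

print(target_tier(50_000))  # a busy VMDK lands on server-side flash
```

The point of the real product, of course, is that the move between tiers happens non-disruptively and transparently to the VM.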

Architectural details

The appliance provides a number of functions. One of its main roles is that of a VASA provider. Through a number of different techniques, this appliance will surface up the underlying capabilities of the various storage types that are available to the customer. An administrator can then take these capabilities and create policies for VMs, and when a VM is being deployed, a policy is chosen. Based on this policy, the VM will be placed on the appropriate underlying storage. Here is the thing though – the admin is deploying to the global NFS namespace with this policy, but under the covers the VM could be provisioned on local DAS storage, NAS, SAN or even cloud storage. Note that this is VASA 2.0, the same version of VASA used with VVols. This is interesting for reasons that will become clearer shortly.
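A simple way to picture policy-driven placement is a first-fit match of a policy’s requirements against the surfaced capabilities. All the pool names, fields and numbers below are my own illustrative assumptions:

```python
# Illustrative sketch of policy-driven placement: an admin's policy is
# matched against capabilities a VASA provider has surfaced. Names and
# numbers are assumptions for illustration only.

storage_pools = [
    {"name": "local-das", "max_iops": 200_000, "latency_ms": 0.2},
    {"name": "nas-array", "max_iops": 50_000,  "latency_ms": 2.0},
    {"name": "cloud-s3",  "max_iops": 1_000,   "latency_ms": 50.0},
]

def place_vm(policy: dict) -> str:
    """Return the first pool that satisfies the policy's requirements."""
    for pool in storage_pools:
        if (pool["max_iops"] >= policy["min_iops"]
                and pool["latency_ms"] <= policy["max_latency_ms"]):
            return pool["name"]
    raise ValueError("no pool can satisfy this policy")

print(place_vm({"min_iops": 40_000, "max_latency_ms": 5.0}))
```

The key point from the article stands regardless of the matching details: the admin only ever sees the global namespace and the policy, not the backing device.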

Note that the communication from vCenter is to Primary Data’s VASA provider. Even if the array has its own native VASA provider, this is not used. So where do these capabilities come from? Good question, since the VASA provider is not native. Kaycee and Graham described a mechanism that has some similarities to how a datastore is profiled in vSphere Storage IO Control/Storage DRS. They look at what the array is doing using a sort of I/O injector mechanism – in particular, they are looking at latency. As well as this, there is a virtual machine deployed on the ESXi hosts which is responsible for gathering client-side statistics, reporting on what the storage is doing from a host perspective. Using these two insights, Primary Data can bubble up an accurate picture of the array, e.g. bandwidth, IOPS and latency, and these can then be used to create policies around read performance, write performance and data protection.
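A toy illustration of combining those two vantage points might look like the following sketch. The percentile-based merge is my own guess at the general idea, not Primary Data’s algorithm:

```python
# Sketch of profiling a datastore from two vantage points, as described
# above: array-side I/O injection and host-side observation by a VM on
# the ESXi hosts. The merge logic here is purely illustrative.

def profile_datastore(injector_samples_ms, host_samples_ms):
    """Summarize latency seen by the injector and by the host-side agent."""
    def p50(samples):
        s = sorted(samples)
        return s[len(s) // 2]
    return {
        "array_latency_ms": p50(injector_samples_ms),
        "host_latency_ms": p50(host_samples_ms),
        # A large gap suggests queuing somewhere between host and array.
        "path_overhead_ms": p50(host_samples_ms) - p50(injector_samples_ms),
    }

profile = profile_datastore([1.0, 1.2, 0.9, 1.1, 1.0], [3.0, 2.8, 3.2, 3.1, 2.9])
print(profile)
```

Having both views is what lets the capabilities surfaced to vCenter reflect what a VM will actually experience, not just what the array claims.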

There are two flavors of the appliance: a hardware version (that will be available from VARs) and a virtual appliance version. The recommended hardware version is a 2U, dual-socket, multi-core x86 system containing 6 x NVMe flash drives. The appliance comes as an HA pair, with replication at the block level. It should be noted that the appliance is not in the I/O path. However, it is designed for several hundred thousand metadata operations per second, tracking whether or not a VMDK is compliant with the policy that has been chosen for it. The virtual appliance comes as an OVA, and can be deployed on standard ESXi hosts.

The appliance is also responsible for the NFS global namespace presentation but keep in mind that the VMs are deployed on the storage arrays using the native protocols. There is no limit to the number of hosts that can mount this NFS namespace.

Policies and Smart Objectives

We already mentioned how VASA surfaces up the capabilities, and how policies are created and chosen at deployment time. Primary Data will then do whatever is necessary to ensure that the VM/VMDK is always compliant, placing the VM on its correct storage. They state that the only way a VM or VMDK should be non-compliant is if there is a hardware failure on the array and the policy requirements cannot be met.

However, there is another feature called Smart Objectives which is quite interesting. Smart Objectives allow operations to be carried out on particular object types (such as a VMDK). For instance, one can create a smart objective that automatically/dynamically moves objects between different levels of storage. For example, if an application requires an increase in IOPS during certain times, it could be moved (non-disruptively) from spinning disk to flash storage. If a file is considered cold, with little or no I/O for an extended period, it can be moved to cheaper cloud storage. You can see how this can be beneficial for backups/archives, as well as for dynamically maintaining your VM’s SLA at a much lower cost. Rather than keeping these on tier1 storage, move them somewhere less expensive and automatically move them back to tier1 storage when performance is required for those VMs/VMDKs. These rules can be implemented and acted on without user interaction if the user chooses.
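A smart objective of this kind can be pictured as a small rule evaluator. The rule shapes and thresholds below are illustrative assumptions, not Primary Data’s actual rules:

```python
# A toy "smart objective" evaluator in the spirit of the feature described
# above: rules that recommend moving a file between tiers without user
# interaction. Thresholds and tier names are illustrative only.

def evaluate(file_stats):
    """Return a recommended move, or None if the file should stay put."""
    # Promote when demand spikes, e.g. a batch job or Black Friday event.
    if file_stats["tier"] != "flash" and file_stats["iops"] > 5_000:
        return "promote-to-flash"
    # Archive cold data off tier1 to cheaper cloud storage.
    if file_stats["tier"] != "cloud" and file_stats["idle_days"] > 90:
        return "archive-to-cloud"
    return None

print(evaluate({"tier": "disk", "iops": 8_000, "idle_days": 0}))
```

The automatic move back to tier1 when performance is needed again is just the promotion rule firing on a previously archived file.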

The Primary Data user interface will show that a VM or VMDK was moved and why it was moved. From vCenter, it simply shows whether a VM is compliant or non-compliant, depending on whether it is meeting its SLA.

Supported storage types

The guys said that they will have DAS, NAS and SAN storage support at GA, but they are also working on having Object storage available too. If they don’t have it at GA, they expect to have it soon after. The object storage will be Amazon S3 and Swift, with Azure coming next.


Management

Primary Data provide their own management UI. However, once vCenter is connected to the VASA provider, the capabilities are surfaced up in vCenter, so there is also the option to create the policies via Storage Policy Based Management (SPBM) in vCenter.

Virtual Volumes

This is an interesting concept that the guys shared with me. Since Primary Data supports VASA 2.0, they can surface up a single Virtual Volume NFS namespace, behind which there could be a number of disparate storage arrays and storage types, e.g. object stores. They will also surface up a number of NFS Protocol Endpoints for communication. They then support vSphere making out-of-band VASA API calls to bind VVols, create and delete VVols, take snapshots and clones, etc., using policies. Essentially, what they enable a customer to do is have VVols without having a VVol-capable array. Given the value of policy-driven storage on the vSphere side of things, and the reduced complexity of doing it with your existing storage, this might be a great way for customers to get started.

I asked some further questions about the VVol implementation. This is how Graham described it to me: “We see each VVol as a file within our data space (VVol datastore) and they are instantiated simply as files on NAS arrays and as block ranges on DAS and SAN devices. For block devices we are essentially creating a file system across the device which allows us to provide file (VMDK) granular management with non-disruptive mobility across differing devices and tiers. When a file is accessed that is local to the node (ESXi server) the I/O is direct to the block device and this enables extremely low latency access with server side flash, NVMe etc. When we archive cold files to Object / Cloud storage they can be accessed in the same data space via some smart protocol conversions and they can be non-disruptively moved back to primary storage tiers to meet performance goals if they become active again.”
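The idea Graham describes, each VVol being a file in the data space backed either by a real file on NAS or by a block range carved from a DAS/SAN device, can be sketched roughly as follows. The classes, fields and allocator here are my own illustrative assumptions:

```python
# Sketch of the VVol backing model described above: a file on NAS arrays,
# or a block range on DAS/SAN devices. All names are illustrative.

from dataclasses import dataclass

@dataclass
class VVol:
    name: str
    backing_kind: str   # "nas-file" or "block-range"
    location: str       # file path, or "device:offset:length"

def instantiate_vvol(name, backend, size_blocks, allocator):
    """Create a VVol on the given backend type."""
    if backend == "nas":
        # On NAS, the VVol is simply a file in an export.
        return VVol(name, "nas-file", f"/export/vvols/{name}.vmdk")
    # On block devices, carve a contiguous range (trivial bump allocator).
    offset = allocator["next_block"]
    allocator["next_block"] += size_blocks
    return VVol(name, "block-range", f"{allocator['device']}:{offset}:{size_blocks}")

alloc = {"device": "naa.600a", "next_block": 0}
v1 = instantiate_vvol("vm1-disk0", "san", 4096, alloc)
print(v1.location)  # naa.600a:0:4096
```

The file-system-across-the-device layer is what gives them VMDK-granular management and non-disruptive mobility across tiers, since a VVol’s backing can change while its identity in the data space stays constant.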

General Availability

There is no exact date as yet. Primary Data have a number of customers doing proof-of-concepts at the moment. I guess when one of these customers wants support in a production environment, then we’ll have a GA date.


Pricing

Nothing could be shared at this point. However, Kaycee stated that they are looking at a subscription model, and that there will be two offerings: one entry-level, one enterprise-level. The main consideration will be metadata performance, and whether a virtual appliance is good enough or a hardware appliance will be necessary.