A closer look at Maxta

Maxta are another storage vendor that I managed to get talking to at this year’s VMworld conference in San Francisco. Although they were present at last year’s VMworld, they only announced themselves in earnest last November (11/12/13) with the release of the Maxta Storage Platform (MxSP). I spent some time with Kiran Sreenivasamurthy, Director of PM & PMM at Maxta, and he was very open in sharing details on the Maxta product.

If you read the blurb on Maxta in the VMworld sponsor/exhibitor list, it states that they eliminate the need for storage arrays, provide enterprise-class data services and offer full virtualization integration from the UI through to data management.

So on the face of it, Maxta is another converged solution, similar in many respects to VMware’s own Virtual SAN, Nutanix, SimpliVity, etc. So what makes Maxta so different? Kiran shared his views with me here.

Architecture

Maxta, as I mentioned, is converged compute and storage. It uses a combination of Solid State Drives (SSD) and Hard Disk Drives (HDD). The drives are consumed by a virtual appliance deployed on each ESXi host in the cluster. This in turn creates a single global namespace/NFS datastore. The NFS datastore is then presented back to the ESXi hosts participating in the cluster. There is presently no way to create multiple different namespaces in a cluster; the storage is a single NFS datastore. Capacity is made up of spinning disk (HDD); flash does not contribute towards datastore capacity, but is used for read, write-back and metadata caching to deliver I/O acceleration.
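This shared-datastore model is easy to sanity-check from vCenter. Below is a minimal pyVmomi sketch (my own illustration, not Maxta tooling) that verifies every ESXi host in a cluster sees the same NFS datastore; the vCenter address, credentials, cluster name and datastore name are all placeholders.

```python
# Illustrative check (not Maxta tooling): confirm every host in the cluster
# has the shared NFS datastore mounted. Names/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "MaxtaCluster")

for host in cluster.host:
    ds_names = [ds.name for ds in host.datastore]
    status = "OK" if "maxta-datastore" in ds_names else "MISSING"
    print(f"{host.name}: {status} ({len(ds_names)} datastores mounted)")

Disconnect(si)
```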

There is also a minimum requirement of 3 nodes in the cluster, as Maxta distributes a copy of a VM’s disk to another host in the cluster (RAID-1). They also have the concept of a witness/quorum – thus the need for 3 hosts. They can scale to the maximum number of hosts supported in a vSphere cluster, which is currently 32 (i.e. there is a logical relationship between the size of a Maxta cluster and a vSphere cluster). I wasn’t sure whether the cluster could be scaled up or out on-the-fly – whether this is a “configure-once” solution, or whether it can be grown with more disks or even more hosts after deployment. I posed this question to Kiran, who replied as follows:

The cluster can be scaled non-disruptively by adding additional nodes to the cluster. The process is very simple. The customer has to add the vSphere host to the vCenter cluster and then, within the Maxta tab, add the node. This will automatically install the Maxta VM on the node and then expand the datastore with the additional capacity from the newly added node.

In addition to scaling capacity by adding a new hyper-converged compute/storage node, customers can also non-disruptively scale capacity by:

1. Replacing existing lower capacity drives with higher capacity drives (for example, 2TB drives with 4TB drives)
2. Adding additional drives to empty drive bays on a server

In all 3 scenarios, compute and storage resources are scaled non-disruptively.

Maxta’s smarts are in the appliance, which is shipped as an OVF and deployed as a VM on every host in the cluster. Each appliance runs a CentOS distribution and consumes 4 vCPUs and 8GB of memory out of the box, but the configuration can be changed if necessary.
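Since the appliance is just a VM, resizing its vCPU/memory allocation is an ordinary VM reconfigure. The sketch below is a generic pyVmomi example of such a reconfigure, not a documented Maxta procedure; the appliance VM name and the new sizing are made-up placeholders.

```python
# Generic VM reconfigure (an assumption, not a Maxta procedure): bump the
# appliance's vCPU/memory. "maxta-appliance-esxi01" is a placeholder name.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "maxta-appliance-esxi01")

spec = vim.vm.ConfigSpec(numCPUs=6, memoryMB=12288)   # e.g. 6 vCPU / 12GB
task = vm.ReconfigVM_Task(spec=spec)                   # usually needs the VM powered off
                                                       # unless CPU/memory hot-add is enabled
Disconnect(si)
```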

Maxta support 1Gb and 10Gb interconnect for their cluster deployments. One assumes that if you are going to go with 1Gb, you’d need to dedicate this to Maxta communication, whereas 10Gb could be shared with other networks.

This verifies Maxta’s first claim – no need for a storage array with Maxta’s MxSP. But you will need to consider local SSDs, local HDDs, local storage controllers and network adapters. On reading previous posts on Maxta, I was under the impression that Maxta supported consumer-grade SSDs. Kiran stated that this is not the case: they only support Enterprise MLC (eMLC) SSDs, not consumer MLC (cMLC) SSDs. Based on experience, I wouldn’t go with the consumer-grade stuff either if you want the best out of a storage solution like this. Consider proper enterprise components for the best experience, especially when you are trying to do rebuilds AND virtual machine I/O at the same time. You’ll also need to consider the resources required by the appliance on each of the ESXi hosts. Kiran did state that Maxta are hardware agnostic, but that they do have some reference architectures available, created with certain hardware vendors.

I/O Distribution

I asked Kiran for details on how Maxta protect Virtual Machines. He replied as follows:

Maxta supports 2 types of protection.

Data protection by synchronously replicating data across 2 servers in the cluster:
The Maxta storage platform automatically makes 2 copies of data on 2 different servers in the cluster. The first copy of data is created on the server where the VM is running, delivering data locality, and the second copy is created on a second server within the cluster. If the VM moves (e.g. vMotion), customers have a choice of moving the data to restore data locality. In addition to delivering data protection against drive failures, this also provides protection against server failures.

Data protection by enabling local copies:
In addition, Maxta also delivers the option of making local copies by enabling the “Enable local copies” option at deployment time. With this option enabled, the Maxta storage platform makes an additional full copy of data on the 2 servers where the copies of data exist. For example: if the two copies of data are on server 1 and server 4, enabling local copies makes an additional full copy on server 1 and on server 4.

Local copies are supported with Maxta software and also by leveraging a RAID card. With Maxta software we create a RAID 10 configuration. With a RAID card we support different RAID configurations (RAID 5, RAID 6).

Enabling local copies will deliver:

  1. Data availability even with any 3 drive failures. This is a key differentiation compared to most storage platforms.
  2. Faster rebuild times, since we do not have to go over the network to rebuild drives.

Given this design, all writes therefore go to the SSD on the local ESXi host and then to the node where the second copy of the data resides. As mentioned earlier, the SSD serves 3 different purposes in this configuration: it acts as the read cache and the write-back cache, and it also holds the metadata.
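To make that write path concrete, here is a toy Python model of the flow described above. It is purely illustrative and not Maxta’s implementation: the write is staged in the local SSD write-back cache, synchronously mirrored to the node holding the second copy, and only then acknowledged.

```python
# Illustrative model (not Maxta's implementation) of the described write path:
# stage the write in the local SSD write-back cache, mirror it synchronously
# to the second node (RAID-1), acknowledge, and destage to HDD later.
class Node:
    def __init__(self, name):
        self.name = name
        self.ssd_cache = []      # write-back cache (and metadata) live on SSD
        self.hdd = []            # HDD capacity tier

    def stage_write(self, block):
        self.ssd_cache.append(block)

    def destage(self):
        # cached writes are flushed to spinning disk later
        self.hdd.extend(self.ssd_cache)
        self.ssd_cache.clear()


def write(block, local: Node, replica: Node) -> str:
    local.stage_write(block)     # first copy: data locality on the VM's host
    replica.stage_write(block)   # second copy: synchronous replication to a peer
    return "ACK"                 # acknowledged once both SSD copies are staged


if __name__ == "__main__":
    esxi01, esxi02 = Node("esxi01"), Node("esxi02")
    print(write(b"vm-data", esxi01, esxi02))   # -> ACK
```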

One other interesting point is the fact that some hosts can work as compute-only nodes. In other words, they do not have to contribute storage but can still run VM workloads. One would have to conclude that in this case, the data locality is not enforced and that both copies of the data will be on remote hosts.

Data Services

Maxta already appear to have a nice set of data services. They provide Thin Provisioning, Cloning, Snapshots, Deduplication and Compression. The cloning and snapshot features include Maxta intellectual property to speed up creation and consume less space. These smarts also prevent snapshots from impacting a VM’s I/O performance. Kiran wished to highlight the fact that all of these services are available by default in a single SKU. Here is a look at Maxta’s C# plugin for snapshot policy management:

[Screenshot: Snapshot policy]

Kiran also mentioned that deduplication can be disabled if necessary, as not all applications may benefit from it. He stated that they enforce data integrity through the use of ‘strong checksums’ and that they also run a ‘scrubber’ process, the purpose of which is to correct any inconsistencies. However, there is no native replication right now. One assumes that DR could be implemented using VMware technologies such as vSphere Replication, and that DR orchestration could be achieved with Site Recovery Manager.
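Maxta haven’t published how their checksums or scrubber are implemented, but conceptually a scrubber walks the stored blocks, recomputes each checksum and repairs a bad copy from its replica. A hedged Python illustration of that idea:

```python
# Conceptual sketch only (Maxta's on-disk format is not public): verify block
# checksums and repair an inconsistent copy from its replica.
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def scrub(primary: dict, replica: dict, checksums: dict) -> None:
    """primary/replica map block-id -> bytes; checksums holds expected digests."""
    for block_id, expected in checksums.items():
        if checksum(primary[block_id]) != expected:
            # inconsistency found: repair the bad copy from the good replica
            assert checksum(replica[block_id]) == expected, "both copies bad"
            primary[block_id] = replica[block_id]

good = b"hello"
store_a = {0: b"hell0"}                       # bit rot on the first copy
store_b = {0: good}
scrub(store_a, store_b, {0: checksum(good)})
print(store_a[0])                             # -> b'hello'
```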

Another service that they offer is the ability to stretch a Maxta cluster across sites. This is achievable as Maxta have the ability to create fault domains, ensuring that both copies of the data are not created on the same site, but rather across sites (same goes for the witness). This means that if a rack/site goes down, there is still a copy of the data available. Kiran confirmed that vSphere HA can be used to restart VMs in such a scenario.
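The fault-domain idea can be shown with a small placement sketch. This is my own illustration of the concept, not Maxta code: the first copy stays local for data locality where possible, and the second copy is forced into a different fault domain; it also shows how a compute-only node simply never receives a copy.

```python
# Illustrative replica placement (not Maxta code): keep the first copy local
# where possible and force the second copy into a different fault domain
# (e.g. the other site), so a site/rack failure never loses both copies.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StorageNode:
    name: str
    fault_domain: str                   # e.g. "site-a" / "site-b"
    contributes_storage: bool = True    # compute-only nodes set this to False

def place_replicas(vm_host: StorageNode,
                   nodes: List[StorageNode]) -> Tuple[StorageNode, StorageNode]:
    candidates = [n for n in nodes if n.contributes_storage]
    # first copy goes local if the VM's host contributes storage (data locality)
    first = vm_host if vm_host.contributes_storage else candidates[0]
    # second copy must land in a different fault domain
    second = next(n for n in candidates
                  if n is not first and n.fault_domain != first.fault_domain)
    return first, second

nodes = [StorageNode("esxi01", "site-a"),
         StorageNode("esxi02", "site-a"),
         StorageNode("esxi03", "site-b"),
         StorageNode("esxi04", "site-b", contributes_storage=False)]
print([n.name for n in place_replicas(nodes[0], nodes)])   # -> ['esxi01', 'esxi03']
```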

This is a very good range of data services indeed. I guess there are still a few questions around replication, stretched cluster behaviour and disaster recovery. However, as with many other storage vendors, VMware technologies could possibly be used to complete the picture here.

Finally, in the case of rebuilds, when perhaps a host or disk has failed, Maxta employ a “throttling mechanism” to ensure that virtual machine I/O is not unduly impacted whilst the rebuild of components is taking place. However, as I stated earlier, I do wonder what sort of impact rebuilding components after a host or disk failure has on virtual machine I/O. This is where having a correctly sized and suitable SSD is critical.
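Maxta haven’t described the throttling mechanism in detail, so the following is just one plausible way such a throttle could work: a token bucket that caps the bandwidth rebuild traffic may consume, leaving headroom for VM I/O. The rate and chunk sizes are arbitrary examples.

```python
# Hedged sketch of a rebuild throttle (not Maxta's actual algorithm): a token
# bucket that limits rebuild bandwidth so VM I/O keeps its headroom.
import time

class TokenBucket:
    def __init__(self, rate_mb_s: float, burst_mb: float):
        self.rate, self.capacity = rate_mb_s, burst_mb
        self.tokens, self.last = burst_mb, time.monotonic()

    def throttle(self, mb: float) -> None:
        """Block until `mb` megabytes of rebuild I/O are allowed."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= mb:
                self.tokens -= mb
                return
            time.sleep((mb - self.tokens) / self.rate)

rebuild_limit = TokenBucket(rate_mb_s=100, burst_mb=50)   # cap rebuilds at ~100 MB/s
for chunk in range(5):
    rebuild_limit.throttle(25)        # each 25 MB rebuild chunk waits for budget
    print(f"rebuilt chunk {chunk}")
```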

vSphere Integration

Kiran told me that they have a C# plugin for Maxta management at the moment; we saw a screenshot earlier related to snapshot policies. There is no web client plugin yet, but all management of the Maxta system can be done from this interface – no additional management interface is needed. Here is a look at the Maxta dashboard:

[Screenshot: Maxta dashboard]

Kiran also mentioned that they are looking at VAAI-NAS, the vSphere APIs for Array Integration for NAS/NFS storage.

Cost

Cost is based on the TB capacity of the NFS datastore, if I understand correctly. I believe that you can have as much flash as you want in the cluster, and implement data copies or not; these do not have any impact on the cost. Cost is simply based on the NFS datastore size.

Summary

This does appear to be a very nice converged solution, with a lot of architectural similarities to other players in this area, including VMware. Converged and hyper-converged solutions are certainly a hot topic at the moment.

3 Replies to “A closer look at Maxta”

  1. Nice write-up. Any indication of $$/GB or possibility of network saturation concerns/remediation?

    1. Maxta mitigates network saturation concerns by co-locating VMs with their data and optimizing network resource utilization.
      That said, like any other hyper-converged solution, the user should ensure that the network bandwidth allocated for storage operations exceeds the expected write activity in the cluster.

      Maxta’s pricing is based on the raw capacity that is part of the Maxta datastore. All the software features are included in the base product offering. The list price of the product starts at $12,000 for 4TB, with a per-TB cost beyond that which depends on the capacity.

      Please contact Maxta directly if you would like to further discuss the multiple ways that Maxta addresses network saturation issues or for additional pricing information.
