A closer look at NetApp clustered Data ONTAP

I’ve been having some interesting discussions with my friends over at NetApp recently. I wanted to learn more about their new clustered Data ONTAP 8.2 features and its new scale-out functionality. In the storage array world, traditional scale-up mechanisms usually involved either replacing disk drives with faster/newer models or replacing old array controllers with newer ones. In the worst case, a forklift upgrade is required to do a technology refresh of your array. Another approach, scale-out, is fast becoming the accepted way of handling storage requirements going forward. With scale-out, you simply add additional resources to your already existing shared storage pool.

Over the past year I have been to a number of VMUG (VMware User Group) meetings and have sat in on some of the NetApp sessions on their clustered Data ONTAP release. NetApp have also realized that the demand is there for scale-out, and they have introduced their very own unified scale-out storage solution called clustered Data ONTAP. Basically, this allows you to take a bunch of different NetApp storage array models and cluster them together to provide a single, unified and virtualized shared storage pool. Using clustered Data ONTAP 8.2, NetApp customers can now increase scalability using a scale-out rather than a scale-up approach. Let’s look at clustered Data ONTAP and some of the new features it brings in more detail.

Architecture

A minimum clustered ONTAP system configuration is made up of two identical nodes, with redundant I/O paths and a set of data services (an HA pair). If one node goes down, the other can take over its storage and provide access to all the data. The cluster can currently scale out to 8 nodes for SAN protocols (iSCSI, Fibre Channel, FCoE) or 24 nodes for NAS-only clusters. All nodes are paired for high availability. The supported protocols in clustered Data ONTAP 8.2 are SMB, NFS, iSCSI, Fibre Channel & FCoE. The cluster may also contain different disk types (SAS, SATA and SSD) and a mix of native NetApp FAS and V-Series controllers. A V-Series controller allows a third-party storage array to be used with a NetApp controller, so that it can run clustered ONTAP and participate in a cluster. Scale-out can be achieved without disrupting user access, that is, with no need for downtime.

Clustered ONTAP can scale both vertically and horizontally via the addition of nodes and/or additional storage to the cluster.

Data Services

As you might expect from NetApp, clustered Data ONTAP has the full range of data services, ranging from data replication, thin provisioning, snapshots and cloning through to deduplication and compression.

Multi-Tenancy

The SVM or Storage Virtual Machine functionality enables multi-tenancy in the cluster. With clustered Data ONTAP, NetApp has added a layer of virtualization between the clients and the network ports. Each SVM has its own set of volumes and its own logical interfaces (LIFs), which can be an IP address for SMB, NFS & iSCSI access or a WWPN (World Wide Port Name) for Fibre Channel & FCoE. ESXi hosts which connect to a given LIF have access only to that SVM’s volumes. There can be multiple LIFs per SVM for different requirements within the same tenant.

One of the cool features of SVMs for NAS is that both LIFs and volumes can be moved non-disruptively between nodes in the cluster (the LIF maintains its IP address as it moves). Volumes can be moved from one set of disks to another set of disks in the cluster, and a LIF can be moved from a port on one node to a port somewhere else in the cluster, all without causing any disruption to user connections. For SAN SVMs, volume moves can still be done, but SAN LIFs (using WWPNs) do not move – these leverage MPIO/ALUA to ensure that there are always optimized paths available. Put together, these non-disruptive operations for the SVM can be leveraged by maintenance, upgrade, repair, capacity add, re-balance or complete technology refresh events that may take place in the lifetime of the cluster, and no downtime will be required.
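To give a flavour of how these non-disruptive moves look from the cluster shell, here is a rough sketch. The SVM, volume, aggregate, node and port names (vs1, vol1, aggr2, node02, e0c) are hypothetical, and exact parameter names may vary between releases:

```
::> volume move start -vserver vs1 -volume vol1 -destination-aggregate aggr2
::> network interface migrate -vserver vs1 -lif lif1 -destination-node node02 -destination-port e0c
```

NAS clients connected through the LIF continue to access their data throughout both operations.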

Quality of Service

One final point to make is the new Quality of Service (QoS) feature introduced in clustered Data ONTAP 8.2. QoS allows you to limit the amount of I/O sent to an SVM, a volume, a LUN, or a file. The I/O can be limited either by the number of operations or the raw throughput, and can be dynamically changed on-the-fly. I need to make further inquiries into how this QoS feature inter-operates with VMware’s Storage I/O Control feature, and indeed Storage DRS. A discussion for another time perhaps.
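For illustration, limits are applied via QoS policy groups which are then assigned to an object such as a volume. A rough sketch of the clustered ONTAP 8.2 CLI follows; the policy group and volume names are hypothetical and the exact syntax may differ by release:

```
::> qos policy-group create -policy-group gold -vserver vs1 -max-throughput 1000iops
::> volume modify -vserver vs1 -volume vol1 -qos-policy-group gold
```

The throughput limit can alternatively be expressed as raw bandwidth (e.g. MB/s) rather than IOPS, and the policy group can be modified on-the-fly without disrupting workloads.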

As I didn’t have a clustered ONTAP 8.2 system at my disposal, much of this information was provided by Joel Kaufman of NetApp. Thank you Joel. Information in this post has also leveraged the contents of the NetApp Clustered Data ONTAP TR-3982 white paper by Charlotte Brooks. Much greater detail around clustered Data ONTAP 8.2 can be found in that collateral.

Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @VMwareStorage
