Following on from my recent post on how to reclaim disks that were previously used by VSAN, I was asked how one can remove a disk group from a host that is participating in a VSAN cluster. This is quite straightforward, but there is one minor caveat: it relates to whether the VSAN cluster has been set up in Automatic mode or Manual mode. If you want to learn more about the behaviour of the different modes, you can read up on it here.
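For those who prefer the command line, a disk group can also be removed with esxcli. This is only a sketch: the `naa.` device name below is a placeholder, and the exact namespace assumes a VSAN-era (5.5) build of esxcli.

```shell
# List the disks currently claimed by VSAN on this host, and note
# the SSD that fronts the disk group you want to remove.
esxcli vsan storage list

# Remove the entire disk group by specifying its SSD.
# Caveat: if the cluster is in Automatic mode, VSAN may simply
# re-claim the disks afterwards, so switch the cluster to Manual
# mode first.
esxcli vsan storage remove --ssd naa.xxxxxxxxxxxxxxxx
```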
A number of customers have raised this question: how do you reclaim disks which were once used by VSAN but which you now wish to use for other purposes? Well, first off, if you are using one of the later builds of VSAN, placing the host into maintenance mode and removing the disk group from the host will automatically remove the partitions from the disks, and you are good to go with reusing them for some other purpose. However, if you do something such as reinstall ESXi on the host without going through the appropriate VSAN clean-up steps first, there may still be VSAN partition information on the disks. So how do you go about cleaning up these disks?
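One way to clean up leftover VSAN partitions is with the partedUtil tool on the ESXi host itself. A rough sketch follows; the `naa.` device name is a placeholder, and the partition numbers shown are the ones typically created by VSAN, so check the output of `getptbl` before deleting anything.

```shell
# Show the partition table on a disk that still carries VSAN metadata.
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

# Delete each VSAN partition by its number (commonly 1 and 2).
# Once the partitions are gone, the disk is free for reuse.
partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 1
partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 2
```

Needless to say, be absolutely certain you have the right device before deleting partitions.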
For those of you participating in the VMware Virtual SAN (VSAN) beta, this is a reminder that there is a VSAN Design & Sizing Guide available on the community forum. It is part of the Virtual SAN (VSAN) Proof of Concept (POC) Kit, and can be found by clicking this link here. The guide has recently been updated to include some Host Memory Requirements, as we got this query from a number of customers participating in the beta. The actual host memory requirement is directly related to the number of physical disks in the host and the number of disk groups configured on the host. If you want to know more about disk groups, have a read of an article that I wrote about disk groups on the vSphere storage blog.
I’ve been having some interesting discussions with my friends over at NetApp recently. I wanted to learn more about their new clustered Data ONTAP 8.2 release and its new scale-out functionality. In the storage array world, traditional scale-up mechanisms usually involved either replacing disk drives with faster/newer models or replacing old array controllers with newer ones. In worst case scenarios, forklift upgrades are required to do a technology refresh of your array. Another approach, scale-out, is fast becoming the accepted way of handling storage requirements going forward. Scale-out storage is now big news. With scale-out, you simply add additional resources to your existing shared storage pool.
Over the past year I have been to a number of VMUG (VMware User Group) meetings and have sat in on some of the NetApp sessions on their clustered Data ONTAP release. NetApp have also realized that the demand is there for scale-out, and they have introduced their very own unified scale-out storage solution called clustered Data ONTAP. Basically, this allows you to take a bunch of different NetApp storage array models and cluster them together to provide a single, unified and virtualized shared storage pool. Using clustered Data ONTAP 8.2, NetApp customers can now increase scalability using a scale-out rather than a scale-up approach. Let’s look at clustered Data ONTAP and some of the new features it brings in more detail.
Thanks to our friends at EMC, I was recently given the chance to attend a session on EMC’s new storage acquisition, ScaleIO. This acquisition generated a lot of interest (and perhaps some confusion) as VMware’s Virtual SAN product seemed to play in that same storage space. My good friend Chad Sakac over at EMC wrote about this some 6 months ago in his evocatively titled blog post VSAN vs. ScaleIO fight! Chad explains where, in his opinion, each product can be positioned and how EMC/VMware customers have a choice of storage options. His article is definitely worth a read. I wanted to learn more about the ScaleIO product and share this with you.
I recently had the pleasure of chatting with Jesse St. Laurent, Product Director at a new storage startup called SimpliVity. SimpliVity finally exited stealth mode today, but has been around since the end of 2009, with development starting in earnest in 2010.
The name of the hardware storage appliance which SimpliVity have just announced is the OmniCube. Having asked Jesse to describe the features of the appliance, he listed the following:
- The OmniCube is a 2U hardware Storage Appliance which has a pre-installed & pre-configured ESXi hypervisor. The appliances are deployed in configurations of 2 or more nodes and use a combination of SimpliVity software and PCIe accelerator cards, both of which are intellectual property (IP) of SimpliVity.
- It deduplicates & compresses all data at inception – there is no need for a third-party appliance/component to deliver this. Jesse stated that they can achieve 1.5:1 for both dedupe and compression, but said he was being very conservative with this estimate.
- The appliance provides space efficient snapshots for backup & other purposes which are VM-centric. Many backup and replication products work off of a whole datastore – SimpliVity works at the VM level.
- The appliance supports a combination of HDD & SSD for cache & performance reasons.
- There is a High Availability feature across multiple OmniCubes located in the same DC.
- There is replication across OmniCubes in different datacenters for BC/DR purposes.
- The appliance is based on a scale out architecture so customers can start out small-scale and then grow as their performance and capacity requirements grow.
- The datastore created by SimpliVity is NFS – therefore this storage can also be shared with other hosts and VMs in the infrastructure.
- The storage is VM-centric in so far as the deployment of VMs is policy driven (i.e. backup/snapshot policy, DR/replication policy, with support for per-VM failover). Many traditional approaches require customers to snapshot or replicate complete datastores when in fact you may only be interested in one or more of the VMs on the datastore. SimpliVity have the ability to snap and replicate on a per-VM basis. I asked whether SimpliVity has a policy to define QoS for the VMs (both for network & storage), and although the appliance is plumbed-in for these sorts of policies under the covers, it is not yet exposed, so it will not be available at GA.
SimpliVity’s OmniCube is powered by Omnistack: the software (SVT in the above diagram) and the PCIe accelerator card. The Omnistack is designed to work with your typical DAS server. One of the other nice features of the appliance is that it is cloud ready – SimpliVity support their Omnistack (without the hardware acceleration) running in a public cloud. At the time of writing, they are only supporting it on Amazon’s EC2. What this does mean, however, is that you can have DR to the cloud pretty much out of the box. The other neat thing is that you can clone and backup across datacenters to any Omnistack instance, and restore from any instance too, including the one based in the cloud.
I put the following questions to Jesse.
- Q. What is SimpliVity’s target market?
- A. Jesse expected customers to start using SimpliVity storage for tier-2 applications, but added that their storage is designed to run any application running on VMware today. SimpliVity feels that current storage offerings are either too complex or too expensive. As I have not yet seen the product in action or any pricing from SimpliVity, I guess time will tell whether SimpliVity is less complex or less expensive than the competition.
- Q. How does SimpliVity differentiate itself from the many other storage appliances operating in this space?
- A. Jesse stated that he believed the SimpliVity appliance was feature complete. He added that a lot of customers were faced with a proliferation of appliances – such as dedupe/compression appliances and cloud gateways – to provide the complete feature set that SimpliVity offers in a single product.
- Q. What sort of vSphere integration is there?
- A. SimpliVity have a plugin to vCenter which allows the SimpliVity appliance(s) to be managed from the vSphere client. All nodes can be managed (same datacenter, different datacenter, nodes in the cloud) from the same management interface.
- Q. What other components are required for deployment?
- A. There is a PCIe ‘accelerator’ card required on the hosts. This OmniCube accelerator card is SimpliVity Intellectual Property. There is also a requirement to have 10GbE connectivity between the hosts, but in small configurations, SimpliVity will support direct connect.
It does seem like a very nice solution and I’m looking forward to seeing a live demo at VMworld 2012. SimpliVity are a gold sponsor at VMworld this year and you will find them at booth 1117.
Get notification of these blog postings and more VMware Storage information by following me on Twitter: @CormacJHogan
Regular readers of my VMware Storage Blog will be no stranger to Nimble Storage. I’ve blogged about them on a number of occasions. I first came across them at a user group meeting in the UK, and I also wrote an article about them when they certified on VMware’s Rapid Desktop Program for VDI.
Nimble Storage have been in touch with me again to share details about their new 2.0 storage architecture. After a very interesting and informative chat with Wen Yu of Nimble, I’m delighted to be able to share these new enhancements with you, in this first post on my new blog site.
Nimble Storage’s new enhancements can be categorized into two areas. The first of these is a new scale out architecture and the second is further integration with vSphere.
Scale to Fit
Scale to Fit architecture is how Nimble Storage describe their new elastic scaling feature. It basically allows customers to scale out their storage on a particular dimension, be it capacity or performance. This new architecture allows customers to start with a small footprint, and then scale performance and capacity. This can be done without having to migrate any data and without any Virtual Machine/application downtime. The great advantage of this, of course, is that it avoids over-provisioning of storage up front, keeping initial costs down. When additional performance or capacity is needed, customers only need to grow on that dimension. This means that customers don’t pay for additional performance if they only need capacity, and vice versa.
vSphere Integration Features
There are 3 new vSphere integration features to call out in this new release.
- Nimble Storage have a new Storage Replication Adapter (SRA) for integrating with VMware Site Recovery Manager (SRM). Business Continuance and Disaster Recovery are essential features for any enterprise class storage array, and it is great to see that Nimble now offer full integration with VMware’s BC/DR flagship product.
- There are a number of additional VAAI offload primitives supported. The first of these is Hardware Assisted Locking (ATS), which enables ESXi hosts to offload VMFS volume locks to the Nimble storage array. The second is the UNMAP primitive, which enables VMFS volumes built on thin-provisioned LUNs to reclaim dead space after a Storage vMotion operation or VM deletion. If I remember correctly from previous conversations with Nimble, they already support the WRITE_SAME primitive.
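If you want to check which VAAI primitives ESXi has detected as supported on a given device, esxcli can show you. A quick sketch; the `eui.` device identifier below is a placeholder for an actual Nimble volume on your host.

```shell
# Query the VAAI support status ESXi has detected for a device.
# The output reports ATS, Clone, Zero (WRITE_SAME) and
# Delete (UNMAP) status individually.
esxcli storage core device vaai status get -d eui.xxxxxxxxxxxxxxxx
```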
- This last feature is the one I am most excited about. Nimble Storage now offer their own Path Selection Plugin (PSP) into the Pluggable Storage Architecture of the VMkernel. This optimized multipathing plugin will load balance I/O, and provide linear performance scalability with a single Nimble storage array or multiple storage arrays in a scale-out cluster. The PSP is called Nimble_PSP_Directed.
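Once a vendor PSP like this is installed, you can verify and assign it from the CLI. A minimal sketch, with some stated assumptions: the device identifier is a placeholder, and the SATP name (`VMW_SATP_ALUA`) is my assumption about which SATP claims Nimble devices, so check your own host's configuration first.

```shell
# Check which PSP is currently claiming a given Nimble device.
esxcli storage nmp device list -d eui.xxxxxxxxxxxxxxxx

# Assign the Nimble PSP to a single device explicitly...
esxcli storage nmp device set -d eui.xxxxxxxxxxxxxxxx -P Nimble_PSP_Directed

# ...or make it the default PSP for the SATP that claims the array,
# so that newly discovered Nimble devices pick it up automatically.
esxcli storage nmp satp set -s VMW_SATP_ALUA -P Nimble_PSP_Directed
```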
Nimble Storage are a sponsor at VMworld 2012. You’ll find them at booth 306 at the US conference this year.