I watched a very cool demonstration this morning from the all-flash array vendor SolidFire. I spoke with SolidFire at the end of last year, and did a blog post about them here. One of the most interesting parts of our conversation last year was how SolidFire’s QoS feature and VMware’s Storage I/O Control (SIOC) feature could interoperate. In a nutshell, QoS works at the datastore/volume layer whereas SIOC operates at the VM/VMDK layer. Last week, Aaron Delp and Adam Carter of SolidFire did an introduction to QoS, both on vSphere and on the SolidFire system. And they also did one of the coolest demos I’d seen in some time, namely how they have managed to get SIOC and QoS to work in tandem.
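To make the layering concrete, here is a toy Python sketch of how the two controls can compose (this is not SolidFire's or VMware's actual algorithm, and the IOPS numbers and share values are invented for illustration). The array clamps a volume's IOPS between its QoS minimum and maximum, and SIOC then divides whatever the datastore receives among VMs in proportion to their disk shares:

```python
def array_qos(demand_iops, min_iops, max_iops):
    """SolidFire-style volume QoS: clamp the volume's IOPS between min and max."""
    return max(min_iops, min(demand_iops, max_iops))

def sioc_distribute(volume_iops, vm_shares):
    """SIOC-style: split the volume's IOPS across VMs in proportion to shares."""
    total = sum(vm_shares.values())
    return {vm: volume_iops * s / total for vm, s in vm_shares.items()}

# A datastore backed by one QoS-managed volume (hypothetical numbers):
granted = array_qos(demand_iops=20_000, min_iops=5_000, max_iops=15_000)
print(sioc_distribute(granted, {"vm-db": 2000, "vm-web": 1000, "vm-test": 500}))
# vm-db gets ~8571 IOPS, vm-web ~4286, vm-test ~2143
```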
Our team was recently asked to take a VSAN novice through a VSAN deployment, to figure out if there were any configuration gotchas. This post will share the stumbling blocks that you might encounter deploying your own VSAN environment.
Well, VSAN is finally GA today. Check out Duncan’s blog post which has lots of good links about where to get the GA bits.
In this post, I am going to address a question about the VM Home Namespace object on VSAN which has come up a number of times recently and has caused a little bit of confusion. If you’ve been following my series of Virtual SAN articles, you may recall that virtual machines deployed on a VSAN datastore are now made up of a set of objects (as opposed to the set of files that we’ve been used to traditionally). These objects may be virtual machine disks (VMDKs), snapshot deltas, VM swap space and of course the VM Home Namespace. The VM Home Namespace is where we store all the virtual machine configuration files, such as the .vmx, .log, digest files, memory snapshots, etc. Now what a number of folks have noticed is that even though they set a VM Storage Policy with various VSAN capabilities, the VM Home Namespace object does not seem to implement the policy settings when viewed via the vSphere web client. This post will aim to explain why.
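As a conceptual sketch only (the post itself goes through the details), the following Python models the behavior I'm assuming here from VMware's VSAN documentation: the VM Home Namespace object honors the policy's failures-to-tolerate setting but pins capabilities such as stripe width and read cache reservation to defaults, which is why the web client appears to show the policy being ignored:

```python
# Assumption (per VMware's VSAN documentation, not verified against this post):
# the home namespace object keeps NumberOfFailuresToTolerate from the VM's
# policy, but overrides other capabilities with fixed defaults.
HOME_NAMESPACE_DEFAULTS = {"stripe_width": 1, "read_cache_reservation": 0}

def effective_policy(vm_policy, object_type):
    """Return the policy a given VSAN object actually applies."""
    if object_type == "vm_home_namespace":
        applied = dict(vm_policy)
        applied.update(HOME_NAMESPACE_DEFAULTS)  # defaults win for this object
        return applied
    return dict(vm_policy)  # VMDKs, snapshot deltas, etc. apply the policy as-is

policy = {"failures_to_tolerate": 1, "stripe_width": 3, "read_cache_reservation": 5}
print(effective_policy(policy, "vmdk"))
print(effective_policy(policy, "vm_home_namespace"))
# The namespace keeps FTT=1 but reports stripe_width=1, hence the apparent mismatch.
```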
I’ve been having lots of fun lately in my new role in Integration Engineering. It is also good to have someone local once again to bounce ideas off. Right now, that person is Paudie O’Riordan (although sometimes I bet he wishes I were in a different timezone). One of the things we are currently looking at is a VSAN implementation using Fusion-io ioDrive2 cards (which our friends over at Fusion-io kindly lent us). The purpose of this post is to show the steps involved in configuring these cards on ESXi and adding the hosts as nodes to a VSAN cluster. However, even though I am posting about it, Paudie did most of the work, so please consider following him on Twitter as he’s got a lot of good vSphere/storage knowledge to share.
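As a rough preview of the host-side steps, here is a minimal Python sketch run on the ESXi host itself (the device IDs are hypothetical placeholders, and it assumes ESXi 5.5's esxcli namespaces). PCIe flash cards are sometimes not auto-detected as SSDs, so a SATP claim rule can tag the device before it is handed to VSAN alongside a magnetic disk:

```python
import subprocess

FIO_DEVICE = "eui.fusionio_iodrive2_placeholder"  # hypothetical device ID
HDD_DEVICE = "naa.magnetic_disk_placeholder"      # hypothetical device ID

def esxcli(*args):
    """Thin wrapper around the host's esxcli binary."""
    subprocess.check_call(["esxcli"] + list(args))

# Tag the PCIe flash card as an SSD, if ESXi has not already detected it as one.
esxcli("storage", "nmp", "satp", "rule", "add",
       "--satp", "VMW_SATP_LOCAL", "--device", FIO_DEVICE,
       "--option", "enable_ssd")
esxcli("storage", "core", "claiming", "reclaim", "--device", FIO_DEVICE)

# Hand the flash device and a magnetic disk to VSAN as a disk group.
esxcli("vsan", "storage", "add", "--ssd", FIO_DEVICE, "--disks", HDD_DEVICE)
```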
Very exciting day today at VMware – Virtual SAN or VSAN has been officially announced.
If you weren’t able to attend, here are some of the interesting parts of the announcement:
- 32-node support (up from the 16-node support announced at Partner Exchange last month, and up from the 8 nodes we supported during the beta)
- 2 million IOPS (using IOmeter 100% read, 4KB block size).
- Also 640K IOPS achieved with 70/30 read/write ratio, 4KB block size and 80% random.
- 3200 virtual machines (100 per node)
- 4.4 PB of storage (35 disks per host x 32 hosts per cluster; a quick sanity check on the arithmetic follows this list)
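The capacity figure works out if you assume roughly 4 TB drives, which the announcement did not spell out. A quick check:

```python
disks_per_host = 35
hosts_per_cluster = 32
tb_per_disk = 4  # assumption: ~4 TB drives; not stated in the announcement

total_tb = disks_per_host * hosts_per_cluster * tb_per_disk
print(total_tb / 1000.0, "PB")  # 4.48 PB, i.e. roughly the quoted 4.4 PB
```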
We were also told that performance scales linearly from a 4-node cluster up to a 32-node cluster.
We also saw interoperability with vSphere Replication for DR, as well as with vSphere Data Protection for backups, vMotion, DRS, HA, VMware View, and so on.
It seems that we’ll GA VSAN during the week of March 10th (next week).
The one thing which wasn’t shared yet was pricing/licensing details. Guess we’ll have to wait for GA to get that info. All very exciting.
I was going to make this part 11 of my vSphere 5.5 Storage Enhancements series, but since this is such a major enhancement to storage in vSphere 5.5, I thought I’d put a little more focus on it. vFRC, short for vSphere Flash Read Cache, is a mechanism whereby the read operations of your virtual machine are accelerated by using an SSD or a PCIe flash device to cache the disk blocks of the application running in the Guest OS of your virtual machine. Rather than going to magnetic disk to read a block of data, the data can be retrieved from a flash cache layer, improving performance and lowering latency. This makes vFRC what is commonly known as a write-through cache (writes are acknowledged only once they reach the backing disk), as opposed to a write-back cache, where the write operation is acknowledged as soon as the block of data enters the cache layer.
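To make the distinction concrete, here is a toy Python sketch of the two acknowledgement policies (purely illustrative; it has nothing to do with vFRC's actual implementation, and vFRC itself only caches reads):

```python
# Toy model of write-through vs. write-back acknowledgement (illustrative only).
class Cache:
    def __init__(self, write_back=False):
        self.blocks = {}             # flash cache: lba -> data
        self.write_back = write_back

    def read(self, lba, disk):
        if lba in self.blocks:       # cache hit: served from flash
            return self.blocks[lba]
        data = disk[lba]             # cache miss: fetch from magnetic disk
        self.blocks[lba] = data      # populate the cache for the next read
        return data

    def write(self, lba, data, disk):
        self.blocks[lba] = data
        if self.write_back:          # write-back: ack as soon as it's in cache
            return "acked"           # (a real cache would flush to disk later)
        disk[lba] = data             # write-through: persist to disk first...
        return "acked"               # ...and only then acknowledge

disk = {0: "old"}
Cache(write_back=False).write(0, "new", disk)
print(disk[0])  # "new": with write-through, the disk is updated before the ack
```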
This is an interesting announcement for those of you following emerging storage technologies. We’ve been talking about flash technologies for some time now, but for the most part flash has come as either an SSD or a PCIe device. Well, we now have another form factor: the DIMM-based flash storage device. And VMware now supports it.