VSAN 6.0 Part 10 – 10% cache recommendation for AF-VSAN

With the release of VSAN 6.0, and the new all-flash configuration (AF-VSAN), I have received a number of queries around our 10% cache recommendation. The main query is, since AF-VSAN no longer requires a read cache, can we get away with a smaller write cache/buffer size?

Before getting into cache sizing, it is probably worth beginning this post with an explanation of the caching algorithm changes between versions 5.5 and 6.0. VSAN 5.5 came as a hybrid configuration only, with a mixture of flash and spinning disk, and the cache acted as both a write buffer (30%) and a read cache (70%). If a read request was not satisfied by the cache, in other words there was a read cache miss, the data block was retrieved from the capacity layer. This was an expensive operation, especially in terms of latency, so the guideline was to keep as much of your working set in cache as possible. Since the majority of virtualized applications have a working set somewhere in the region of 10%, this is where the 10% cache size recommendation came from. With hybrid, data blocks are regularly destaged from write cache to spinning disk. This uses a proximal algorithm, which looks to destage data blocks that are contiguous (adjacent to one another), speeding up the destaging operations.
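To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch of the 10% guideline. The function name and the example figures are mine for illustration only; the guideline is based on anticipated consumed capacity, and the 70/30 split applies to the hybrid configuration described above.

```python
# Back-of-the-envelope VSAN cache sizing, following the 10% guideline
# described above. All names and example figures are illustrative.

def recommended_cache_gb(consumed_capacity_gb: float,
                         hybrid: bool = True) -> dict:
    """Suggest a flash cache size for a VSAN cluster.

    consumed_capacity_gb: anticipated consumed VM capacity.
    hybrid: if True, split the cache 70% read cache / 30% write buffer
        as in VSAN 5.5 hybrid; all-flash has no read cache.
    """
    cache = 0.10 * consumed_capacity_gb  # the 10% working-set rule
    if hybrid:
        return {"total": cache,
                "read_cache": 0.70 * cache,
                "write_buffer": 0.30 * cache}
    return {"total": cache, "write_buffer": cache}

# Example: 20 TB of anticipated consumed capacity
print(recommended_cache_gb(20_000))
# {'total': 2000.0, 'read_cache': 1400.0, 'write_buffer': 600.0}
```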

Continue reading

vSphere 6.0 Storage Features Part 8: VAAI UNMAP changes

A few weeks ago, my good pal Cody Hosterman over at Pure Storage was experimenting with VAAI and discovered that he could successfully UNMAP (reclaim) blocks directly from a Guest OS in vSphere 6.0. VAAI is shorthand for the vSphere APIs for Array Integration. Cody wrote about his findings here. Effectively, if you have deleted files within a Guest OS, and your VM is thinly provisioned, you can tell the array through this VAAI primitive that you are no longer using these blocks, which allows the array to reclaim them for other uses. I know a lot of you have been waiting for this functionality for some time. However, Cody had a bunch of questions and reached out to me to see if I could provide some answers. After conversing with a number of engineers and product managers here at VMware, here are some answers to the questions Cody asked.
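For readers who want to experiment, here is a minimal sketch of triggering in-guest reclamation. It assumes a Linux guest on a thin-provisioned disk and uses the standard fstrim utility, which asks the filesystem to discard unused blocks (surfacing as SCSI UNMAP); whether the reclaim actually reaches the array depends on the vSphere 6.0 behaviour discussed in the full post.

```python
# Sketch: in-guest space reclamation on Linux. Assumes a Linux guest on
# a thin-provisioned disk where the vSphere 6.0 stack passes UNMAP
# through, as described above. fstrim(8) asks the filesystem to discard
# unused blocks; typically requires root privileges.

import subprocess

def reclaim(mount_point: str = "/") -> str:
    """Run fstrim against a mount point and return its verbose output."""
    result = subprocess.run(
        ["fstrim", "-v", mount_point],   # -v reports how much was trimmed
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(reclaim("/"))   # e.g. "/: 4.2 GiB (4508876800 bytes) trimmed"
```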

Continue reading

VSAN 6.0 Part 4 – All-Flash VSAN Capacity Tier Considerations

In Virtual SAN version 6.0, VMware introduced support for an all-flash VSAN. In other words, both the caching layer and the capacity layer could be made up of flash-based devices such as SSDs.  However, the mechanism for marking some flash devices as being designated for the capacity layer, while leaving other flash devices as designated for the caching layer, is not at all intuitive at first glance. For that reason, I’ve included some steps here on how to do it.
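As a taste of what is behind the link: in 6.0, a flash device can be tagged for the capacity tier from the ESXi command line. The sketch below wraps the esxcli vsan storage tag command in Python purely for illustration; the device identifier is a placeholder, and you should confirm the exact syntax against the steps in the full post.

```python
# Hedged sketch: tagging a flash device for the VSAN 6.0 capacity tier
# via the "esxcli vsan storage tag" namespace, run on an ESXi host.
# The device identifier below is a placeholder, not a real device.

import subprocess

def tag_capacity_flash(device: str) -> None:
    """Mark a flash device so VSAN treats it as capacity, not cache."""
    subprocess.run(
        ["esxcli", "vsan", "storage", "tag", "add",
         "-d", device, "-t", "capacityFlash"],
        check=True,   # raise if esxcli reports an error
    )

# Example (placeholder device name):
# tag_capacity_flash("naa.55cd2e404b66ed28")
```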

Continue reading

Virtual Volumes – A closer look at Storage Containers

There are a couple of key concepts to understand with Virtual Volumes (or VVols for short). VVols is one of the key new storage features in vSphere 6.0, and you can get an overview of VVols from this post. The first key concept is VASA, the vSphere APIs for Storage Awareness. I wrote about the initial release of VASA way back at the vSphere 5.0 launch. VASA has changed significantly to support VVols, with the introduction of version 2.0 in vSphere 6.0, but that is a topic for another day. The second key concept is the Protocol Endpoint, a logical I/O proxy presented to a host to communicate with Virtual Volumes. My good pal Duncan writes about some considerations with PEs and queue depths here. This again is a topic for a deeper conversation, but not today. Today, I want to talk about a third major concept, the Storage Container.
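To keep the three concepts straight, here is a toy model of how they relate. This is purely a mental aid of my own, not any VMware API: the array advertises Storage Containers through its VASA provider, VVols live inside a container, and host I/O to every VVol is funnelled through a Protocol Endpoint.

```python
# A toy mental model (not any VMware API) of the three VVols concepts
# discussed above: Storage Containers pool capacity on the array, VVols
# live inside a container, and all host I/O flows through a Protocol
# Endpoint rather than directly to each VVol.

from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualVolume:
    name: str          # e.g. a VM's data, config or swap object
    size_gb: int

@dataclass
class StorageContainer:
    """A pool of capacity on the array, surfaced to vSphere as a
    VVol datastore; a logical construct, not a LUN."""
    name: str
    capacity_gb: int
    vvols: List[VirtualVolume] = field(default_factory=list)

@dataclass
class ProtocolEndpoint:
    """Logical I/O proxy: the host addresses the PE, and the array
    directs the I/O to the individual VVols behind it."""
    identifier: str
    containers: List[StorageContainer] = field(default_factory=list)

pe = ProtocolEndpoint("naa.pe-placeholder")       # placeholder ID
sc = StorageContainer("gold-container", 10_000)
sc.vvols.append(VirtualVolume("vm1-data", 100))
pe.containers.append(sc)
```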

Continue reading

A closer look at SpringPath

Another hyper-converged storage company has just emerged out of stealth. Last week I had the opportunity to catch up with the team from SpringPath (formerly StorVisor), based in Silicon Valley. The company has a bunch of ex-VMware folks on-board, such as Mallik Mahalingam and Krishna Yadappanavar. Mallik and Krishna were both involved in a number of I/O related initiatives during their time at VMware. Let's take a closer look at their new hyper-converged storage product.

Continue reading

Tips for a successful Virtual SAN (VSAN) Proof Of Concept (POC)

Pretty soon I'll be heading out on the road to talk at various VMUGs about our first 6 months with VSAN, VMware's Virtual SAN product. Regular readers will need no introduction to VSAN, and as was mentioned at VMworld this year, we're gearing up for our next major release. With that in mind, I thought it might be useful to go back over the last 6 months: a look at some successes, some design decisions you might have to make, the available troubleshooting tools, and some common gotchas (all the things that will help you have a successful Proof of Concept, or POC, with VSAN), followed by a quick look at some futures.

Continue reading

A closer look at Maxta

Maxta is another storage vendor I managed to get talking to at this year's VMworld conference in San Francisco. Although they were present at last year's VMworld, they only announced themselves in earnest last November (11/12/13) with the release of the Maxta Storage Platform (MxSP). I spent some time with Kiran Sreenivasamurthy, Director of PM & PMM at Maxta, and he was very open in sharing details on the Maxta product.

If you read the blurb on Maxta in the VMworld sponsor/exhibitor list, it states that they eliminate the need for storage arrays, provide enterprise-class data services, and offer full virtualization integration from UI to data management.

So on the face of it, Maxta is another converged solution, similar in many respects to VMware’s own Virtual SAN, Nutanix, Simplivity, etc. So what makes Maxta so different? Kiran shared his views with me here.

Continue reading