A closer look at Infinio Accelerator 2.0

I took the opportunity last week (while I was over in the Boston area) to catch up with Scott Davis. I’ve known Scott a long time, as he held various roles at VMware over a number of years. Scott is currently CTO at Infinio, a company that has developed an I/O acceleration product for virtual machines. The new version, Infinio Accelerator 2.0, was released only a few weeks back, so I decided to reach out to Scott and find out about the enhancements that went into it.

Continue reading

vROps Management Pack for Virtual SAN

Virtual SAN already has a number of features and extensions for performance monitoring, real-time diagnostics and troubleshooting. In particular, there is VSAN Observer, which is included as part of the Ruby vSphere Console (RVC). Another new feature is the Health Check Plugin, which was recently launched for VSAN 6.0. However, a lot of our VSAN customers are already using vRealize Operations Manager, and they have asked if it could be extended to VSAN, allowing them to use a “single pane of glass” for their infrastructure monitoring. That’s just what we have done, and the beta for the vROps Management Pack for Virtual SAN is now open. You can sign up by clicking here.

Continue reading

VSAN 6.0 Part 10 – 10% cache recommendation for AF-VSAN

With the release of VSAN 6.0, and the new all-flash configuration (AF-VSAN), I have received a number of queries around our 10% cache recommendation. The main query is: since AF-VSAN no longer requires a read cache, can we get away with a smaller write cache/buffer size?

Before getting into the cache sizing, it is probably worth beginning this post with an explanation of the caching algorithm changes between version 5.5 and 6.0. VSAN 5.5 came as a hybrid configuration only, with a mixture of flash and spinning disk, and the flash cache served as both a write buffer (30%) and a read cache (70%). If a read request was not satisfied by the cache, in other words there was a read cache miss, the data block was retrieved from the capacity layer. This was an expensive operation, especially in terms of latency, so the guideline was to keep as much of your working set in cache as possible. Since the majority of virtualized applications have a working set somewhere in the region of 10%, this is where the 10% cache size recommendation came from. With hybrid, there is regular destaging of data blocks from write cache to spinning disk. This is a proximal algorithm, which looks to destage data blocks that are contiguous (adjacent to one another), speeding up the destaging operations.
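To put some numbers on the 10% guideline, here is a quick back-of-the-envelope sketch in Python. The VM count, per-VM consumed capacity and the helper name are made-up figures purely for illustration; the 70/30 split applies to the hybrid configuration described above.

def hybrid_cache_sizing(num_vms, consumed_gb_per_vm):
    # Guideline: flash cache should be ~10% of anticipated
    # consumed (not provisioned) virtual machine capacity.
    total_consumed_gb = num_vms * consumed_gb_per_vm
    cache_gb = 0.10 * total_consumed_gb
    # Hybrid VSAN splits the cache device 70% read / 30% write.
    return cache_gb, 0.70 * cache_gb, 0.30 * cache_gb

# Example: 100 VMs, each consuming roughly 50GB of capacity.
cache, read_cache, write_buffer = hybrid_cache_sizing(100, 50)
print(cache, read_cache, write_buffer)   # 500.0 350.0 150.0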

Continue reading

vSphere 6.0 Storage Features Part 7: VAAI XCOPY improvements

The more astute of you who have already moved to vSphere 6.0, and like looking at CLI outputs, may have observed some new columns/fields in the PSA claimrules when you run the following command:

# esxcli storage core claimrule list --claimrule-class=VAAI

The new fields are as follows:

XCOPY Use Array    XCOPY Use            XCOPY Max
Reported Values    Multiple Segments    Transfer Size
---------------    -----------------    -------------
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
false              false                0
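If you want to check these settings programmatically rather than scanning the CLI output, something along the lines of the following Python sketch could be run in the ESXi shell. The parsing is deliberately naive and assumes the default output layout shown above; it is not an official interface.

import subprocess

def xcopy_enabled_rules():
    # Same command as above; returns the raw claimrule listing.
    out = subprocess.run(
        ["esxcli", "storage", "core", "claimrule", "list",
         "--claimrule-class=VAAI"],
        capture_output=True, text=True, check=True).stdout
    # By default all three XCOPY fields are false/false/0, so any
    # line containing a 'true' token has had an XCOPY option enabled.
    return [line for line in out.splitlines() if "true" in line.split()]

for rule in xcopy_enabled_rules():
    print(rule)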

Continue reading

VSAN 6.0 Part 5 – new vsanSparse snapshots

There is a new snapshot format introduced in VSAN 6.0 called vsanSparse. It replaces the traditional vmfsSparse format (redo logs). The vmfsSparse format was used when snapshots of VMs were taken in VSAN 5.5, and it is also the format used when a snapshot is taken of a VM residing on traditional VMFS and NFS. The older vmfsSparse format left a lot to be desired when it came to performance and scalability. This KB article from our support team, indicating that no snapshot should be used for more than 72 hours and that snapshot chains should contain no more than 2-3 snapshots, speaks for itself.
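As an aside, if you wanted to audit a VM against that KB guidance (snapshots no older than 72 hours, chains no deeper than 2-3), a rough pyVmomi sketch might look like the following. The thresholds come from the KB numbers above, but the function itself is mine, purely for illustration.

from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=72)  # KB guidance: retire snapshots within 72 hours
MAX_DEPTH = 3                  # KB guidance: chains of no more than 2-3

def audit_snapshots(vm):
    # vm is a pyVmomi vim.VirtualMachine object.
    if vm.snapshot is None:
        return
    now = datetime.now(timezone.utc)

    def walk(snap_trees, depth):
        for snap in snap_trees:
            if now - snap.createTime > MAX_AGE:
                print(f"{vm.name}: snapshot '{snap.name}' exceeds 72 hours")
            if depth > MAX_DEPTH:
                print(f"{vm.name}: snapshot chain deeper than {MAX_DEPTH}")
            walk(snap.childSnapshotList, depth + 1)

    walk(vm.snapshot.rootSnapshotList, 1)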

This new vsanSparse snapshot format leverages features of the new (v2) on-disk format in VSAN 6.0, VirstoFS. VirstoFS is the first implementation of technology that VMware acquired when it bought a company called Virsto a number of years ago. You can get an overview of the company from this blog post I did prior to the acquisition.

Continue reading

vSphere 6.0 Storage Features Part 1: NFS v4.1

Although most of my time is dedicated to Virtual SAN (VSAN) these days, I am still very interested in the core storage features that are part of vSphere. I reached out earlier to a number of core storage product managers and engineers to find out what new and exciting features are included in vSphere 6.0. The first feature is one that I know a lot of customers are waiting on – NFS v4.1. Yes, it’s finally here.

Continue reading

VSAN Part 35 – Considerations when dynamically changing policy

I was having some discussions recently on the community forums about Virtual SAN behaviour when a VM storage policy is changed on the fly. This is a really nice feature of Virtual SAN, whereby requirements related to availability and performance can be changed dynamically without impacting the running virtual machine. I wrote about it in the blog post here. However, there are some important considerations to take into account when changing a policy on the fly like this.

Continue reading