In Virtual SAN version 6.0, VMware introduced support for an all-flash VSAN. In other words, both the caching layer and the capacity layer can be made up of flash-based devices such as SSDs. However, the mechanism for designating some flash devices for the capacity layer, while leaving others for the caching layer, is not at all intuitive at first glance. For that reason, I’ve included some steps here on how to do it.
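As a sketch of what the process looks like from the ESXi shell: in the 6.0 release, flash devices can be tagged for the capacity tier with the `esxcli vsan storage tag` namespace. The device identifier below (`naa.xxxxxxxx`) is a placeholder — substitute your own device ID as shown by `vdq -q`.

```shell
# List local disks and their current VSAN eligibility/state
vdq -q

# Tag a flash device for the capacity tier (naa.xxxxxxxx is a placeholder)
esxcli vsan storage tag add -d naa.xxxxxxxx -t capacityFlash

# Verify the tag took effect on that device
vdq -q -d naa.xxxxxxxx

# If you change your mind, remove the tag to return the device
# to being a candidate for the caching tier
esxcli vsan storage tag remove -d naa.xxxxxxxx -t capacityFlash
```

Once tagged, the device shows up as eligible for the capacity layer when you build or rebuild disk groups.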
Another hyper-converged storage company has just emerged out of stealth. Last week I had the opportunity to catch up with the team from SpringPath (formerly StorVisor), based in Silicon Valley. The company has a bunch of ex-VMware folks on board, such as Mallik Mahalingam and Krishna Yadappanavar. Mallik and Krishna were both involved in a number of I/O related initiatives during their time at VMware. Let’s take a closer look at their new hyper-converged storage product.
The embargo on what’s new in vSphere 6.0 has now been lifted, so we can start to discuss the new features and functionality publicly. For the last few months, I’ve been heavily involved in preparing for the Virtual SAN launch. What follows is a brief description of what I find to be the most interesting and exciting of the upcoming features in Virtual SAN 6.0. Later on, I’ll follow up with more in-depth blog posts on the new features and functionality.
I was involved in an interesting case recently. It was interesting because the customer was running an 8-node cluster with 4 disk groups per host and 5 x ~900GB hard disks per disk group, which should have provided somewhere in the region of 150TB of storage capacity (less a little overhead for metadata). But after some maintenance tasks, the customer was seeing only around 100TB on the VSAN datastore.
This was a little strange since the VSAN status in the vSphere web client was showing all 160 disks claimed by VSAN, yet the capacity of the VSAN datastore did not reflect this. So what could cause this behaviour?
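Before digging in, it is worth sanity-checking the expected raw capacity with some quick back-of-the-envelope arithmetic (the 900GB figure is the approximate disk size mentioned above):

```shell
# Expected raw VSAN capacity from the cluster layout described above
hosts=8          # nodes in the cluster
dgs=4            # disk groups per host
disks=5          # capacity disks per disk group
size_gb=900      # approximate size of each capacity disk, in GB

total_gb=$((hosts * dgs * disks * size_gb))
echo "${total_gb} GB raw"   # 144000 GB, i.e. ~144 TB before metadata overhead
```

That also confirms all 160 disks (8 x 4 x 5) should be contributing capacity, so a ~100TB datastore means a sizeable chunk of disks are claimed but not contributing.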
Pretty soon I’ll be heading out on the road to talk at various VMUGs about our first 6 months with VSAN, VMware’s Virtual SAN product. Regular readers will need no introduction to VSAN, and as was mentioned at VMworld this year, we’re gearing up for our next major release. With that in mind, I thought it might be useful to look back over the last 6 months: some successes, some design decisions you might have to make, the available troubleshooting tools, and some common gotchas (all those things that will help you have a successful Proof of Concept (POC) with VSAN), followed by a quick look at some futures.
Whilst at VMworld 2014, I had the opportunity to catch up with the Nexenta team, who have been working on a very interesting project with VMware’s Virtual SAN (VSAN). The Nexenta Connect for VSAN product, running on top of VSAN, is designed to provide file services, which allows VSAN not only to store your virtual machines, but also to provide SMB and NFS shares for those virtual machines. I caught up with Michael Letschin and Gijsbert Janssen van Doorn of the Nexenta team to learn more and get a tech preview of the product.
There was a very interesting discussion on our internal forums here at VMware over the past week. One of our guys had built out a VSAN cluster, and everything looked good. However, on attempting to deploy a virtual machine on the VSAN datastore, he kept hitting an error reporting that it “cannot complete file creation operation”. As I said, everything looked healthy. The cluster formed correctly, there were no network partitions and the network status was normal. So what could be the problem?
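When the cluster looks healthy but object creation fails, a few quick checks from an ESXi host can narrow things down. The commands below are illustrative first steps rather than the actual root cause of this case; the VMkernel interface name (`vmk2`) and peer IP are placeholders for your own environment.

```shell
# Which VMkernel interface(s) is VSAN using on this host?
esxcli vsan network list

# Membership and health as seen from this host
esxcli vsan cluster get

# A surprisingly common culprit: large frames being silently dropped
# somewhere in the path. Test with a don't-fragment ping at near-MTU
# size (8972 bytes for a 9000 MTU; use 1472 for a 1500 MTU) against
# another host's VSAN VMkernel IP.
vmkping -I vmk2 -d -s 8972 192.168.100.12
```

If the small-packet ping succeeds but the near-MTU don’t-fragment ping fails, the hosts can form a cluster yet still fail on larger I/O operations, which matches exactly this kind of “healthy-looking but broken” symptom.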