This question has come up on a number of occasions in the past. It usually arises when there is a question about scalability and the number of disk drives that can be supported on a single host participating in Virtual SAN. The configuration maximums for a Virtual SAN node are 5 disk groups, with each disk group containing 1 flash device and up to 7 capacity devices (these capacity devices are magnetic disks in hybrid configurations or flash devices in all-flash configurations). Now the inevitable next question is how this configuration is implemented on a physical server…
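To put those maximums in concrete terms, here is a quick back-of-the-envelope sketch (purely illustrative arithmetic, using the per-node maximums quoted above) of how many devices a single host can end up presenting:

```python
# Quick sketch of the per-host device count implied by the VSAN maximums
# quoted above (5 disk groups, each with 1 flash cache device and up to
# 7 capacity devices). Purely illustrative arithmetic, not a sizing tool.

MAX_DISK_GROUPS_PER_HOST = 5
CACHE_DEVICES_PER_GROUP = 1
MAX_CAPACITY_DEVICES_PER_GROUP = 7

cache_devices = MAX_DISK_GROUPS_PER_HOST * CACHE_DEVICES_PER_GROUP
capacity_devices = MAX_DISK_GROUPS_PER_HOST * MAX_CAPACITY_DEVICES_PER_GROUP

print(f"Cache devices per host:    {cache_devices}")                     # 5
print(f"Capacity devices per host: {capacity_devices}")                  # 35
print(f"Total devices per host:    {cache_devices + capacity_devices}")  # 40
```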
I already wrote an article on the NexentaConnect for VSAN product after seeing it in action at VMworld last year. More recently, I had the opportunity to play with it in earnest. Rather than giving you the whole low-down on NexentaConnect, I will use this post to show the steps involved in presenting a file share built by NexentaConnect to a VM. In this case, the VM and the file share both reside on Virtual SAN. I will also show you how to simply revert to a point-in-time snapshot of the file share using NexentaConnect. To answer the common…
I’ve had an opportunity recently to get some hands-on with HyTrust’s Data Control product to encrypt virtual machine disks in my Virtual SAN 6.0 environment. I won’t deep dive into all of the “bells and whistles” details about HyTrust – my good buddy Rawlinson has already done a tremendous job detailing that in this blog post. Instead I am going to go through a step-by-step example of how to use HyTrust and show how it prevents your virtual machine disk from being snooped. In my case, I am encrypting virtual machine disks from VMs that are…
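As a rough illustration of what “snooping” a virtual machine disk means here, the sketch below is my own hypothetical check (not a HyTrust tool): it scans a copy of a VM’s flat VMDK file for a known plaintext marker that was previously written inside the guest. On an unencrypted disk the marker turns up; once the disk is encrypted, it should not.

```python
# Hypothetical illustration (not part of HyTrust): scan a copied flat .vmdk
# for a plaintext marker previously written inside the guest. If the disk is
# unencrypted, the marker can be found; on an encrypted disk it should not be.

import sys

CHUNK = 4 * 1024 * 1024  # read the disk image in 4 MB chunks


def contains_marker(vmdk_path: str, marker: bytes) -> bool:
    overlap = len(marker) - 1
    tail = b""
    with open(vmdk_path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                return False
            # keep the tail of the previous chunk so a marker split across
            # two chunks is still detected
            if marker in tail + chunk:
                return True
            tail = chunk[-overlap:] if overlap else b""


if __name__ == "__main__":
    path, text = sys.argv[1], sys.argv[2]   # e.g. ./vm-flat.vmdk "TOP-SECRET"
    found = contains_marker(path, text.encode())
    print("marker found (disk contents readable)" if found else "marker not found")
```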
Virtual SAN already has a number of features and extensions for performance monitoring and real-time diagnostics and troubleshooting. In particular, there is VSAN Observer, which is included as part of the Ruby vSphere Console (RVC). Another new feature is the Health Check Plugin, which was recently launched for VSAN 6.0. However, a lot of our VSAN customers are already using vRealize Operations Manager, and they have asked if this could be extended to VSAN, allowing them to use a “single pane of glass” for their infrastructure monitoring. That’s just what we have done, and the beta for the vROps…
A short post again today. For those of you who are considering evaluating Virtual SAN, our friends over at the VMware User Group (VMUG) are giving you the opportunity to trial VSAN for 6 months. This offer is only available to VMUG members, but joining VMUG is free. And really, if you are not already a member of your local VMUG, shame on you. This is a great way to get hands-on experience with VSAN. What are you waiting for? Click here to get your six month trial of VSAN. On the topic of VMUGs, I will be presenting on…
I had a query recently from a partner who was deploying VMware Horizon View 6.1 on top of an all-flash VSAN 6.0. They had done all the due diligence with configuring the AF-VSAN appropriately, marking certain flash devices as capacity devices, and so on. The configuration looked something like this: They then went ahead and deployed Horizon View 6.1, which they had done many times before on hybrid configurations. They were able to successfully deploy full clone pools on the AF-VSAN, but hit a strange issue when deploying linked clone pools (floating/dedicated). The clone virtual machine operation would fail with…
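For reference, the “marking certain flash devices as capacity devices” step mentioned above is typically done per host from the command line. The small sketch below is my own helper and only prints the commands; the exact esxcli syntax shown (“esxcli vsan storage tag add -d <device> -t capacityFlash”) is an assumption on my part, so verify it against the VSAN 6.0 documentation for your build before running anything.

```python
# Minimal sketch (not from the original post): print the esxcli commands
# typically used in VSAN 6.0 to tag flash devices for the capacity tier of
# an all-flash configuration. The command and flags are my assumption here;
# check the VSAN 6.0 documentation before use.

# Placeholder NAA identifiers for the flash devices destined for the
# capacity tier on a given host.
capacity_flash_devices = [
    "naa.55cd2e404b66ec2a",
    "naa.55cd2e404b66ec2b",
]

for device in capacity_flash_devices:
    print(f"esxcli vsan storage tag add -d {device} -t capacityFlash")
```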
With the release of VSAN 6.0, and the new all-flash configuration (AF-VSAN), I have received a number of queries around our 10% cache recommendation. The main query is: since AF-VSAN no longer requires a read cache, can we get away with a smaller write cache/buffer size? Before getting into the cache sizing, it is probably worth beginning this post with an explanation of the caching algorithm changes between version 5.5 and 6.0. In VSAN 5.5, which came as a hybrid configuration only (a mixture of flash and spinning disk), cache behaved as both a write buffer (30%) and read…
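As a simple worked example of the sizing discussed here (a sketch using the 10% guideline and the hybrid write-buffer/read-cache split mentioned above; the consumed-capacity figure is just an illustrative number):

```python
# Worked example of the cache sizing discussed above (illustrative numbers).
# Guideline: flash cache sized at 10% of anticipated consumed capacity.
# In hybrid VSAN 5.5 that cache acts as a 30% write buffer / 70% read cache.

consumed_capacity_gb = 10_000          # e.g. 10 TB of anticipated consumed capacity

flash_cache_gb = 0.10 * consumed_capacity_gb     # 10% rule  -> 1000 GB
write_buffer_gb = 0.30 * flash_cache_gb          # hybrid write buffer -> 300 GB
read_cache_gb = 0.70 * flash_cache_gb            # hybrid read cache   -> 700 GB

print(f"Flash cache (10% rule): {flash_cache_gb:.0f} GB")
print(f"  Hybrid write buffer : {write_buffer_gb:.0f} GB")
print(f"  Hybrid read cache   : {read_cache_gb:.0f} GB")
```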