A couple of months back, I wrote a short article on Rubrik. They were just coming out of stealth mode and had started an early access program. Since they had not officially launched, there wasn’t a lot I was allowed to say about the company, other than to give a high-level overview. As they have now officially launched their r300 series of products, along with news of a massive $41 million Series B funding round, I can now share some additional details about their products and technology. Just to recap on what Rubrik do, they are offering a converged…
With the release of VSAN 6.0, and the new all-flash configuration (AF-VSAN), I have received a number of queries around our 10% cache recommendation. The main query is: since AF-VSAN no longer requires a read cache, can we get away with a smaller write cache/buffer size? Before getting into cache sizing, it is probably worth beginning this post with an explanation of the caching algorithm changes between version 5.5 and 6.0. In VSAN 5.5, which came only as a hybrid configuration with a mixture of flash and spinning disk, cache behaved as both a write buffer (30%) and read…
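To put some rough numbers on the 10% rule of thumb, here is a quick back-of-the-envelope calculation in Python. The 20TB consumed-capacity figure is purely an illustrative assumption; the 10%, 30% and 70% splits are the ones discussed above.

```python
# Illustrative cache sizing under the "10% of consumed capacity" rule of thumb.
# The workload size below is made up for the example.

consumed_capacity_gb = 20_000                   # anticipated consumed VM capacity

flash_cache_gb = 0.10 * consumed_capacity_gb    # 10% recommendation -> 2,000 GB flash

# Hybrid configuration: the cache tier is split between writes and reads
write_buffer_gb = 0.30 * flash_cache_gb         # 30% write buffer -> 600 GB
read_cache_gb   = 0.70 * flash_cache_gb         # 70% read cache   -> 1,400 GB

# All-flash configuration: no read cache, so the cache device serves writes only
af_write_buffer_gb = flash_cache_gb             # 100% write buffer -> 2,000 GB

print(f"hybrid: {write_buffer_gb:.0f} GB write buffer, {read_cache_gb:.0f} GB read cache")
print(f"all-flash: {af_write_buffer_gb:.0f} GB write buffer")
```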
Today VMware announces the Virtual SAN 6.0 Health Check Plugin, a feature that will check your Virtual SAN configuration, both proactively and reactively, and highlight any abnormal conditions found in the cluster. This is available to all our VSAN customers right now. Not only does it check the health of the cluster, but it also checks the state of the network, host connectivity, physical disk status, and underlying virtual machine object state. This is a great tool for ensuring that an initial deployment or proof-of-concept of VSAN has been rolled out successfully, giving you confidence in your VSAN deployment. It…
This is another nice new feature of Virtual SAN 6.0. It is essentially a directive to VSAN to start re-balancing components belonging to virtual machine objects across all the hosts and all the disks in the cluster. Why might you want to do this? Well, it’s very simple. As VMs are deployed on the VSAN datastore, algorithms place those components across the cluster in a balanced fashion. But what if a host was placed into maintenance mode, and you requested that the data on the host be evacuated prior to entering maintenance mode, and now…
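To give a feel for what “unbalanced” means here, the toy sketch below flags a cluster whose per-disk utilisation has drifted apart. The 30% variance threshold is my assumption of the re-balance default; the real placement and re-balancing logic is, of course, internal to VSAN.

```python
# Toy illustration only: flag a cluster as unbalanced when some capacity disk's
# utilisation is much higher than the least-used disk's. The 30% threshold is
# an assumption; VSAN's actual re-balancing algorithm is internal to the product.

def needs_rebalance(disk_utilisation, variance_threshold=0.30):
    """disk_utilisation: fraction of each capacity disk in use, e.g. 0.55 = 55%."""
    least_used = min(disk_utilisation)
    return any(used - least_used > variance_threshold for used in disk_utilisation)

# A host evacuated for maintenance and re-added empty leaves the cluster skewed:
print(needs_rebalance([0.70, 0.65, 0.72, 0.05]))   # True  -> re-balancing helps
print(needs_rebalance([0.42, 0.38, 0.45, 0.40]))   # False -> already balanced
```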
One of the really nice new features of VSAN 6.0 is fault domains. Previously, there was very little control over where VSAN placed virtual machine components. In order to protect against something like a rack failure, you may have had to use a very high NumberOfFailuresToTolerate value, resulting in multiple copies of the VM data dispersed around the cluster. With VSAN 6.0, this is no longer a concern, as hosts participating in the VSAN cluster can be placed in different fault domains. This means that component placement will take place across fault domains and not just across hosts. Let’s look…
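As a quick sanity check on the numbers, the sketch below applies the familiar 2n+1 rule: tolerating n failures requires n+1 copies of the data plus n witness components, i.e. 2n+1 hosts, or, once fault domains are configured, 2n+1 fault domains. (The witness count is simplified here; in practice VSAN may place additional witnesses.)

```python
# The 2n+1 rule for NumberOfFailuresToTolerate (FTT), applied to fault domains.
# Witness counting is simplified: VSAN may place more witnesses in practice.

def required_fault_domains(ftt):
    copies = ftt + 1      # full replicas of the object
    witnesses = ftt       # witness components needed for quorum (simplified)
    return copies + witnesses

for ftt in (1, 2, 3):
    print(f"FTT={ftt}: at least {required_fault_domains(ftt)} fault domains (or hosts)")
```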
Before I begin, this isn’t really a feature of VSAN per se. In vSphere 6.0, you can also blink LEDs on disk drives without VSAN deployed. However, because of the scale-up and scale-out features in VSAN 6.0, where you can have a great many disk drives and ESXi hosts, being able to identify a drive for replacement becomes very important, so this is obviously a useful feature. And of course I wanted to test it out, see how it works, etc. In my 4-node cluster, I started to test this feature on some disks in…
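For anyone who prefers the API to the web client: my understanding is that the vSphere 6.0 API exposes this through the TurnDiskLocatorLedOn_Task and TurnDiskLocatorLedOff_Task methods on HostStorageSystem. A minimal pyVmomi sketch, with placeholder credentials and placeholder disk-matching logic, might look like this:

```python
# Hedged sketch: blinking a drive's locator LED via the vSphere API with pyVmomi.
# Hostname, credentials and the disk match below are placeholders. Depending on
# your pyVmomi version you may also need to pass an SSL context to SmartConnect.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password")                      # placeholders
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                                # host with the suspect drive
    storage = host.configManager.storageSystem

    # Pick out the SCSI disk to identify, e.g. by its NAA identifier
    disk = next(lun for lun in storage.storageDeviceInfo.scsiLun
                if isinstance(lun, vim.host.ScsiDisk)
                and lun.canonicalName.startswith("naa."))   # placeholder match

    storage.TurnDiskLocatorLedOn_Task(scsiDiskUuids=[disk.uuid])
    # ... walk to the rack, find the blinking drive, swap it ...
    storage.TurnDiskLocatorLedOff_Task(scsiDiskUuids=[disk.uuid])
finally:
    Disconnect(si)
```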
There is a subtle difference in maintenance mode behaviours between VSAN version 5.5 and VSAN version 6.0. In Virtual SAN version 5.5, when a host is placed into maintenance mode with the “Ensure Accessibility” option, the host in maintenance mode continues to contribute its storage towards the VSAN datastore. In other words, any VMs that had components stored on this host remained fully compliant, with all of their components available. In VSAN 6.0, this behaviour changed. Now, when a host is placed into maintenance mode, it no longer contributes storage to the VSAN datastore, and any components that reside…
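These maintenance mode options are also visible in the vSphere API: the MaintenanceSpec passed to EnterMaintenanceMode_Task carries a VSAN decommission mode whose objectAction can be noAction, ensureObjectAccessibility or evacuateAllData. A minimal pyVmomi sketch, assuming `host` is a vim.HostSystem already looked up as in the LED example above:

```python
# Hedged sketch: entering maintenance mode with the VSAN "Ensure Accessibility"
# option via pyVmomi. `host` is assumed to be a vim.HostSystem obtained as in
# the earlier sketch; error handling and task-waiting are omitted.

from pyVmomi import vim

spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(
        # one of: "noAction", "ensureObjectAccessibility", "evacuateAllData"
        objectAction="ensureObjectAccessibility"
    )
)

task = host.EnterMaintenanceMode_Task(
    timeout=0,                     # 0 = no timeout
    evacuatePoweredOffVms=False,
    maintenanceSpec=spec,
)
```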