Datrium is a new storage company that only recently came out of stealth. They are one of the companies that I really wanted to catch up with at VMworld 2015. They have a lot of well-respected individuals on their team, including Boris Weissman, who was a principal engineer at VMware, and Brian Biles of Data Domain fame. They also count Diane Greene, co-founder of VMware, among their investors. So there is a significant track record in both storage and virtualization at the company.
This question has come up on a number of occasions in the past. It usually arises when there is a question about scalability and the number of disk drives that can be supported on a single host participating in Virtual SAN. The configuration maximums for a Virtual SAN node are 5 disk groups, with each disk group containing 1 flash device and up to 7 capacity devices (these capacity devices are magnetic disks in hybrid configurations or flash devices in all-flash configurations). Now the inevitable next question is how this configuration is implemented on a physical server…
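To make the arithmetic behind those maximums concrete, here is a minimal sketch; the limits come straight from the figures above, and the calculation itself is just illustration:

```python
# Virtual SAN 6.0 per-host configuration maximums (from the limits above).
MAX_DISK_GROUPS = 5         # disk groups per host
CACHE_PER_GROUP = 1         # flash cache device per disk group
MAX_CAPACITY_PER_GROUP = 7  # capacity devices per disk group

max_capacity_devices = MAX_DISK_GROUPS * MAX_CAPACITY_PER_GROUP
max_total_devices = MAX_DISK_GROUPS * (CACHE_PER_GROUP + MAX_CAPACITY_PER_GROUP)

print(f"Capacity devices per host: {max_capacity_devices}")  # 35
print(f"Total devices per host:    {max_total_devices}")     # 40
```

So a fully populated node needs slots and controller connectivity for 40 drives, which is exactly why the physical server layout question follows.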
Regular readers will know that I normally blog about the technical aspects of storage, as opposed to writing opinion pieces. However, there have been a number of articles published recently questioning the value of VMware’s Virtual Volumes, commonly referred to as VVols. In general, the pieces I have read ask whether or not VVols (or, to be more accurate, the per-VM granularity feature of VVols) adds value when NFS already provides per-VM granularity in the form of files. The point that was missed in these pieces is that VVols is so much more than per-VM granularity. I’ve just come back…
This is another nice new feature of Virtual SAN 6.0. It basically is a directive to VSAN to start re-balancing components belonging to virtual machine objects around all the hosts and all the disks in the cluster. Why might you want to do this? Well, it’s very simple. As VMs are deployed on the VSAN datastore, there are algorithms in place to place those components across the cluster in a balanced fashion. But what if a hosts was placed into maintenance mode, and you requested that the data on the host be evacuated prior to entering maintenance mode, and now…
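As a toy illustration of the balancing idea (and not VSAN’s actual algorithm), the sketch below flags a set of disks as unbalanced when the fullest and least-full disks diverge too far. The 30% spread used here is an assumption on my part, based on my recollection of the default variance threshold for proactive rebalance:

```python
# Toy illustration of a rebalance decision; not VSAN's actual algorithm.
# ASSUMPTION: a 30% spread between the fullest and least-full disk is the
# trigger, mirroring my understanding of the proactive rebalance default.

REBALANCE_THRESHOLD = 0.30

def needs_rebalance(disk_utilisation: list[float]) -> bool:
    """Return True when the fullest and least-full disks diverge too far.

    disk_utilisation holds per-disk fullness as fractions, e.g. 0.72 = 72%.
    """
    return max(disk_utilisation) - min(disk_utilisation) > REBALANCE_THRESHOLD

# Example: one disk left nearly empty after a maintenance-mode evacuation.
print(needs_rebalance([0.70, 0.65, 0.05]))  # True - worth rebalancing
print(needs_rebalance([0.50, 0.45, 0.40]))  # False - already balanced
```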
There is a subtle difference in maintenance mode behaviour between VSAN version 5.5 and VSAN version 6.0. In Virtual SAN version 5.5, when a host is placed into maintenance mode with the “Ensure Accessibility” option, the host in maintenance mode continues to contribute its storage towards the VSAN datastore. In other words, any VMs that had components stored on this host remained fully compliant, with all of their components available. In VSAN 6.0, this behaviour changed. Now, when a host is placed into maintenance mode, it no longer contributes storage to the VSAN datastore, and any components that reside…
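Why does a host no longer contributing storage matter for compliance? A VSAN object remains accessible as long as more than 50% of its components (votes) are available, but it is only fully compliant while all of them are. Here is a minimal sketch of that distinction, using made-up component counts:

```python
# Minimal sketch of VSAN object state, assuming the documented rules:
# accessible while > 50% of components (votes) are available,
# fully compliant only while ALL components are available.

def object_state(components_available: int, components_total: int) -> str:
    if components_available == components_total:
        return "compliant"                   # every component reachable
    if components_available * 2 > components_total:
        return "accessible (non-compliant)"  # quorum held, redundancy reduced
    return "inaccessible"                    # quorum lost

# FTT=1 object: two replicas plus a witness = 3 components.
print(object_state(3, 3))  # compliant - all components on-line
print(object_state(2, 3))  # accessible (non-compliant) - one host in maintenance
print(object_state(1, 3))  # inaccessible - quorum lost
```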
There is a new snapshot format introduced in VSAN 6.0 called vsanSparse. It replaces the traditional vmfsSparse format (redo logs). The vmfsSparse format was used when snapshots of VMs were taken in VSAN 5.5, and it is also the format used when a snapshot is taken of a VM residing on traditional VMFS and NFS. The older vmfsSparse format left a lot to be desired when it came to performance and scalability. This KB article from our support team, indicating that no snapshot should be used for more than 72 hours, and that snapshot chains should contain no more than 2-3 snapshots,…
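To see why redo-log snapshots scale poorly, consider a toy model: each snapshot level records only the blocks written after it was taken, so a read may have to walk the whole chain before falling back to the base disk. The class below is purely illustrative and is not the vmfsSparse on-disk format:

```python
# Toy illustration of a redo-log snapshot chain; not the vmfsSparse format.
# Each snapshot level stores only the blocks written after it was taken;
# a read walks the chain newest-to-oldest before falling back to the base.

class SnapshotChain:
    def __init__(self, base: dict[int, bytes]):
        self.base = base          # block number -> data on the base disk
        self.levels: list[dict[int, bytes]] = []

    def take_snapshot(self) -> None:
        self.levels.append({})    # new empty redo log captures future writes

    def write(self, block: int, data: bytes) -> None:
        target = self.levels[-1] if self.levels else self.base
        target[block] = data

    def read(self, block: int) -> bytes:
        # The longer the chain, the more levels a read may have to probe:
        # this per-read walk is one reason long redo-log chains were slow.
        for level in reversed(self.levels):
            if block in level:
                return level[block]
        return self.base[block]

disk = SnapshotChain({0: b"base"})
disk.take_snapshot()
disk.write(0, b"after-snap-1")
disk.take_snapshot()
print(disk.read(0))  # b'after-snap-1' - found one level down the chain
```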
In Virtual SAN version 6.0, VMware introduced support for an all-flash VSAN. In other words, both the caching layer and the capacity layer could be made up of flash-based devices such as SSDs. However, the mechanism for marking some flash devices for the capacity layer, while leaving others designated for the caching layer, is not at all intuitive at first glance. For that reason, I’ve included some steps here on how to do it.
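For anyone who wants to script the tagging step, here is a rough sketch that shells out to esxcli on an ESXi 6.0 host. The esxcli vsan storage tag add command with the capacityFlash tag is the 6.0 mechanism as I understand it; the device names are placeholders, so verify against your own build before relying on it:

```python
# Rough sketch: tag flash devices for the VSAN capacity tier by shelling
# out to esxcli on an ESXi 6.0 host. Device names below are placeholders.
import subprocess

# ASSUMPTION: these are the flash devices destined for the capacity layer.
capacity_devices = ["naa.XXXXXXXXXXXXXXXX", "naa.YYYYYYYYYYYYYYYY"]

for device in capacity_devices:
    # "esxcli vsan storage tag add -d <device> -t capacityFlash" marks a
    # flash device as a capacity-tier device in ESXi 6.0.
    subprocess.run(
        ["esxcli", "vsan", "storage", "tag", "add",
         "-d", device, "-t", "capacityFlash"],
        check=True,
    )
    # "vdq -q -d <device>" can then be used to confirm IsCapacityFlash is set.
```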