Our team was recently asked to walk a VSAN novice through a VSAN deployment to identify any configuration gotchas. This post shares the stumbling blocks you might encounter when deploying your own VSAN environment.
1. Make sure that the disks that you use are clean and empty
We have seen this multiple times throughout the beta. If you have disks that contain old partition information, especially disks that were used in a previous VSAN beta deployment, make sure you clean them first. VSAN will only claim disks that are local and empty (i.e. contain no partition information). If old VSAN information remains in the partitions, you will not be able to create a disk group, and your disk group configuration will report errors.
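One way to inspect and clear leftover partitions is with the `partedUtil` utility in the ESXi shell. This is a sketch; the `naa.xxxxxxxxxxxxxxxx` device name is a placeholder, and you should substitute the identifier of the disk you intend to wipe. Be certain you have the right device, as deleting partitions is destructive.

```shell
# List the disk devices visible to this host
ls /vmfs/devices/disks/

# Show the current partition table on a candidate disk
# (naa.xxxxxxxxxxxxxxxx is a placeholder device name)
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx

# Delete a leftover partition by its partition number (here, partition 1)
partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 1
```

Once the disk shows no partitions in `getptbl` output, VSAN should consider it eligible.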
2. Disks not being claimed automatically
This is another common issue. If you set your VSAN cluster to run in Automatic mode, you would expect it to claim all of the local disks. That is true, but some SAS controllers report their disks to ESXi as non-local. Why? Because some SAS controllers allow their disks to be accessed by more than one host. The symptom is that in the VSAN General tab, the disks in use will report 0 of X eligible (where X is the number of disks in the hosts participating in the cluster), and the Total and Free capacity of the VSAN datastore will be 0 (assuming all your hosts are configured similarly, which they should be if you are adhering to best practices). If you find that VSAN is not claiming your disks, go to the Disk Management section and click 'Select all eligible disks'.
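You can also check from the ESXi shell whether a disk is being reported as non-local, and, if appropriate for your controller, tag it as local with a SATP claim rule. This is a sketch, and the `naa.xxxxxxxxxxxxxxxx` device name is a placeholder; only mark a disk as local if you are sure it is not actually shared between hosts.

```shell
# Check whether ESXi reports the disk as local
# (look for "Is Local: true" or "Is Local: false" in the output)
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep -i "is local"

# Add a claim rule tagging the device as local
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.xxxxxxxxxxxxxxxx -o enable_local

# Reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx
```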
3. VSAN requires multicast on the network
I’ve already called this out in a separate blog post, but it is worth mentioning again as it came up a number of times in beta testing. VSAN uses multicast to communicate between cluster members. Ensure your network configuration is set up to allow this.