VSAN Part 16 – Reclaiming disks for other uses
A number of customers have raised this question: how do you reclaim disks that were once used by VSAN so that they can be used for other purposes? Well, first off, if you are running one of the later builds of VSAN, placing the host into maintenance mode and removing the disk group from the host will automatically remove the partitions from the disks, and you are good to go with reusing those disks for some other purpose. However, if you do something such as reinstalling ESXi on the host without going through the appropriate VSAN clean-up steps first, there may still be VSAN partition information on the disks. So how do you go about cleaning up these disks?
There are two methods available – one is a nice controlled method, the other not so much.
First of all, esxcli contains a nice command to remove physical disks from VSAN disk groups. This wasn't in the early beta versions of VSAN, so you may not be aware of it.
Usage: esxcli vsan storage remove [cmd options]
The command options are -d (remove a magnetic disk by device name), -s (remove an SSD by device name) and -u (remove a VSAN disk by UUID). Note that the -s option for the SSD will also remove the magnetic disks in that disk group.
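As a quick sketch of how this might look in practice (the device name naa.500yyyyyy below is a placeholder, not from the original post):

```shell
# List the disks currently claimed by VSAN so you can find the device names
esxcli vsan storage list

# Remove the SSD (naa.500yyyyyy is a hypothetical device name) -- note that
# this also removes the magnetic disks in the same disk group
esxcli vsan storage remove -s naa.500yyyyyy
```

These commands must be run directly on the ESXi host in question.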
What is the other option? Well, before we had this nice esxcli command, we had to use the partedUtil delete command to clean up the disks. But if VSAN is still configured on the ESXi host and has claimed those disks, you will get an error similar to the following when you run partedUtil against them:
Error: Read-only file system during write on /dev/disks/naa.500xxxxxxxx
WriteNewPtable: Unable to commit to device /dev/disks/naa.500xxxxxxxx
At this point, you can try the following steps. The first is to make this host leave the VSAN cluster manually. You can do this with the following esxcli command on the ESXi host in question:
esxcli vsan cluster leave
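If you want to verify that the host has actually left the cluster before going any further, esxcli can report the cluster membership state. A quick check might look like this:

```shell
# Show this host's VSAN cluster status; after a successful leave, this
# should report that the host is no longer part of a VSAN cluster
esxcli vsan cluster get
```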
Once the command has returned, you can then begin to unload the VSAN VMkernel modules. There are three in total, vsan, plog and lsomcommon. You would do this as follows:
vmkload_mod -u vsan
vmkload_mod -u plog
vmkload_mod -u lsomcommon
Unloading VMkernel modules is not something we would normally recommend. Ideally, if you need to do this step, the ESXi host should be decommissioned, or any workloads should have been moved elsewhere, just in case. Since publishing this article, I've been asked to call out this method of reclaiming disks as unsupported – so you've been warned!
Next use partedUtil to display the current partitions and delete them from the disk drive:
~ # partedUtil get /dev/disks/naa.500xxxxxx
15566 255 63 250069680
1 2048 6143 0 0
2 6144 250069646 0 0
~ # partedUtil getptbl /dev/disks/naa.500xxxxxx
gpt
15566 255 63 250069680
1 2048 6143 381CFCCC728811E092EE000C2911D0B2 vsan 0
2 6144 250069646 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
~ #
~ # partedUtil delete /dev/disks/naa.500xxxxxx 1
~ # partedUtil delete /dev/disks/naa.500xxxxxx 2
There are two partitions, 1 & 2, to delete. Once the partitions have been deleted, reload the VMkernel modules using the vmkload_mod command:
vmkload_mod lsomcommon
vmkload_mod plog
vmkload_mod vsan
At this point, the disk can now be reused for other purposes such as VMFS or Raw Device Mapping (RDM), or in the case of an SSD, it can now be used for vSphere Flash Read Cache (vFRC).
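As a sketch of reusing a cleaned disk for VMFS, you can recreate a partition table and format it from the same shell. The device name and datastore label below are placeholders, and the partition type GUID is the standard VMFS GUID shown in the getptbl output earlier:

```shell
# Find the usable sector range for the new partition (device name is a placeholder)
partedUtil getUsableSectors /dev/disks/naa.500xxxxxx

# Create a single VMFS partition spanning the disk; replace END with the
# second number returned above (the GUID is the standard VMFS partition type)
partedUtil setptbl /dev/disks/naa.500xxxxxx gpt \
  "1 2048 END AA31E02A400F11DB9590000C2911D1B8 0"

# Format partition 1 as VMFS-5; "reclaimed-ds" is a hypothetical label
vmkfstools -C vmfs5 -S reclaimed-ds /dev/disks/naa.500xxxxxx:1
```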
The second method is a little more involved, and hopefully you won't have to use it. However, I have included it here as it has helped us reclaim previously used VSAN disks in the past. The advice is to use the esxcli command, but if that doesn't work for you, check out the alternate method.
Hi Cormac, just wanted to say great post. The unsupported option 2 worked for me, but with a little variation in terms of order. I unloaded vsan, plog and lsomcommon. I think vsan is a dependency and I was prompted with "module symbol in use". The rest worked just great! Thanks!
Thanks Vicardo – I may have got the modules in the wrong order. I was trying to recall from memory. I’ll double check and change the order if it is wrong.