vSphere 5.1 Storage Enhancements – Part 1: VMFS-5
Welcome to the first in a series of posts on the new storage enhancements in vSphere 5.1. This first post concentrates on VMFS. There are two major enhancements to VMFS-5 in the vSphere 5.1 release.
VMFS File Sharing Limits Increase
Prior to vSphere 5.1, the maximum number of ESXi hosts which could share a read-only file on a VMFS filesystem was 8. This was a limiting factor for products and features which use linked clones. Linked clones are simply “read/write” snapshots of a “master or parent” desktop image. In particular, it was a limitation for vCloud Director deployments using linked clones for Fast Provisioning of vApps, and for VMware View VDI deployments using linked clones for desktops.
In vSphere 5.1, we are increasing this maximum number of hosts that can share a read-only file (or to state this another way, we are increasing the number of concurrent host locks) to 32. This will only apply to hosts running vSphere 5.1 and higher on VMFS-5. Now vCloud Director and VMware View deployments using linked clones can have 32 hosts sharing the same base disk image.
This makes VMFS-5 datastores as scalable as NFS for VMware View VDI deployments and vCloud Director deployments that use linked clones.
It should be noted that VMware View 5.0 (and earlier versions) limited the number of hosts which could use linked-clone based desktops to 8. This was true for both VMFS and NFS datastores. VMware View 5.1, released earlier in 2012, increased this host count to 32 for NFS on vSphere 5.0. With the next release of VMware View & vSphere 5.1, you can have 32 hosts sharing the same base disk with both NFS & VMFS datastores.
One final point – this is a driver-only enhancement. There are no on-disk changes required on the VMFS-5 volume to benefit from this new feature. Therefore customers who are already on vSphere 5.0 and VMFS-5 need only move to vSphere 5.1; there is no upgrade or change required to their existing VMFS-5 datastores.
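As a quick sanity check from the ESXi shell (the datastore name below is just a placeholder), something like the following will confirm the VMFS version of a datastore and the ESXi release on the host:
vmware -v
vmkfstools -Ph /vmfs/volumes/mydatastore
The first line of the vmkfstools -Ph output reports the file system version (e.g. VMFS-5.54), and vmware -v reports the ESXi release and build.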
VOMA – vSphere On-disk Metadata Analyzer
VOMA is a new customer-facing metadata consistency checker, run from the CLI of an ESXi 5.1 host. It checks both the Logical Volume Manager (LVM) and VMFS for issues, and it works on both VMFS-3 & VMFS-5 datastores. It runs in a check-only (read-only) mode and will not change any of the metadata. There are a number of very important guidelines around using the tool. For instance, VMFS volumes must not have any running VMs if you want to run VOMA; VOMA will check for this and will report back if there are any local and/or remote running VMs. The VMFS volume can be mounted or unmounted when you run VOMA, but you should not analyze a VMFS volume that is in use by other hosts.
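To give a feel for the syntax, a typical check-only run against the first partition of a device looks something like this (the naa identifier below is just a placeholder; the Devfs Path of each device can be found with esxcli storage core device list):
voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1
Note the :1 at the end, which points VOMA at the VMFS partition rather than the whole device.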
If you find yourself in the unfortunate position of suspecting data corruption on your VMFS volume, prepare to restore from backup, or look to engage a 3rd party data recovery organization if you do not have backups. VMware support will be able to help in diagnosing the severity of any suspected corruption, but they are under no obligation to recover your data.
I’m sure you will agree that this is indeed a very nice tool to have at your disposal.
Get notified of these blog postings and more VMware Storage information by following me on Twitter: @CormacJHogan
Hi Cormac !
Certainly it’s a nice tool. I tried executing it but didn’t get the expected output.
~ # voma -f check -d /vmfs/devices/disks/vmhba41\:c0\:T6\:L0:1
-sh: voma: not found
Am I missing something here? Thanks!
It’s in /sbin, which should already be in your $PATH. Which version of ESXi 5.1 are you using? (use vmware -v)
Thanks for the quick response Cormac. It’s VMware ESXi 5.0.0 build-469512.
/sbin # voma -f check -d /vmfs/devices/disks/vmhba41\:c0\:T6\:L0:1
-sh: voma: not found
Hi Umesh, this feature is only available on 5.1. It is not available on 5.0.
Do an ‘esxcli storage core device list’ and check for the following line: Devfs Path: /vmfs/devices/disks/mpx.vmhba2:xx:xx:xx. I suspect you need the mpx.vmhba in the path, but I can’t be sure.
Ahh yeah, that won’t help either!
Yes indeed. VOMA is only available in 5.1.
I missed his comment. Thx 🙂
I’d like to add – if you can’t easily stop all I/O on a datastore you want to check – you can collect a VMFS metadata dump (e.g. 1200 MB, see: http://kb.vmware.com/kb/1020645 for details) and let VOMA run against the dump with ‘-d /path/to/dump.dd’. You would see a number of stale locks in this case (expected), which you can ignore, but a corruption would also be identified this way. The dump has to be taken from the VMFS partition (not from the beginning of the device).
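For anyone wondering what that capture looks like, it is essentially a dd of the start of the VMFS partition (not the whole device). A rough sketch, using the 1200 MB figure mentioned above and placeholder device and output paths, would be along these lines:
dd if=/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1 of=/vmfs/volumes/scratch_datastore/dump.dd bs=1M count=1200
Check the KB article for the exact size and options appropriate to your configuration.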
I have my doubts about this 5.1 Pluggable technology. Please look at this. I have a disk where I can see the VMFS partition, but no volume is available, so I cannot mount it. voma shows:
voma -f check -d /vmfs/devices/disks/naa.6d4ae5207ae1440018ad90c309c0154f
Module name is missing. Using “vmfs” as default
Checking if device is actively used by other hosts
Running VMFS Checker version 0.9 in check mode
Initializing LVM metadata, Basic Checks will be done
ERROR: Missing LVM Magic. Disk doesn’t have valid LVM Device
ERROR: Failed to Initialize LVM Metadata
VOMA failed to check device : Not a Logical Volume
But in the graphical client, when I try to Add Storage, it shows 8 partitions and one of them is VMFS. How do I get back my virtual machines from inside that particular partition? What happened is that I added a new disk and reinstalled ESXi 5.1 on the new disk. So now my working data is actually locked up on the old disk and there does not seem to be any way to use it, although it is clearly there.
If I use this command:
partedUtil get /vmfs/devices/disks/naa.6d4ae5207ae1440018ad90c309c0154f
195469 255 63 3140222976
1 64 8191 0 128
5 8224 520191 0 0
6 520224 1032191 0 0
7 1032224 1257471 0 0
8 1257504 1843199 0 0
2 1843200 10229759 0 0
3 10229760 2854748126 0 0
I see my data in the last line. How do I mount it??
If your VMFS partition is intact, and is not a duplicate of the new VMFS partition which was created when you installed 5.1, a rescan should automatically mount it. Reasons for it not mounting could be ESXi finding a duplicate partition name, or treating the partition as a snapshot. Examine the vmkernel.log file for possible reasons why the partition is not mounting. I’d urge you to open a support request, as this is not the correct forum for troubleshooting issues of this nature.
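For reference, the sort of thing to check from the CLI (the volume label below is just a placeholder) would be along these lines:
esxcli storage core adapter rescan --all
esxcli storage vmfs snapshot list
esxcli storage vmfs snapshot mount -l old_datastore_label
If the volume shows up in the snapshot list, the mount command should bring it back online; if it shows up nowhere, the vmkernel.log entries logged during the rescan are the next place to look.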
ESXi sees the drive, and sees the partitions on it, but cannot turn it into a volume, or even a snapshot, so the data is inaccessible. Since the server is not in production, I am not going to spend money on support; I am just testing the technology. But this case tells you that the architecture is not as resilient as it claims to be. Ideally, vmkfstools -V should pick it up. When I type this:
ls -al /vmfs/devices/disks
I can see the partition.
I’ve done a “dd” from the problem VMFS-5 partition and transferred the 1.5 GB dump file to an ESXi 5.1 host (ESXSRV24).
When I run VOMA there, I get this:
/sbin # voma -m vmfs -f check -d /vmfs/volumes/esxsrv24_Boot-LUN/dump.bin
Checking if device is actively used by other hosts
ERROR: Failed to reserve device. Inappropriate ioctl for device
Aborting VOMA
/sbin #
Make sure you follow the correct steps to capture the dump file as per http://kb.vmware.com/kb/1020645.
I did the capture following KB 1020645!?
Ah! It looks like we decided that VOMA will only run against unmounted VMFS. We will not allow it to be run against a dump file. Not sure why we did that, but that’s the reason for the error.
So what can I do to solve my problem below:
> ~ # ls -al /vmfs/volumes/Store1/VM1/
> ls: /vmfs/volumes/Store1/VM1/VM1.vmsd: No such file or directory
> drwxr-xr-x 1 root root 420 May 14 17:41 .
> drwxr-xr-t 1 root root 57400 May 16 07:51 ..
> ~ #
> ~ # rmdir /vmfs/volumes/Store1/VM1/
> rmdir: ‘/vmfs/volumes/Store1/VM1/’: Directory not empty
> ~ #
> ~ # rm -rf /vmfs/volumes/Store1/VM1/*
> ~ #
> ~ # rmdir /vmfs/volumes/Store1/VM1/
> rmdir: ‘/vmfs/volumes/Store1/VM1/’: Directory not empty
> ~ #
> ~ # ls /vmfs/volumes/Store1/VM1/
> ls: /vmfs/volumes/Store1/VM1/VM1.vmsd: No such file or directory
> ~ #
It’s a VMFS-5 (v5.54) datastore presented to an ESXi 5.0 (build 1024429) cluster in a vCD 1.51 environment. All ESXi hosts were rebooted without solving the problem. It seems there is an orphaned directory entry pointing to a .vmsd file from a vse-fencing app which has already been deleted.