fsck of vCenter Server Appliance 6.0 partitions

I hadn’t realized that we had begun using LVM (Logical Volume Manager) in the vCenter Server Appliance (VCSA) version 6.0. Of course, I found out the hard way after a network outage in our lab brought down our VCSA, which was running on NFS. On reboot, the VCSA complained about file system integrity as follows:

[Screenshot: boot-time fsck prompt reporting errors on /dev/mapper/log_vg-log]

What is /dev/mapper/log_vg-log? I’d never seen that before, so I logged into another of my VCSA 6.0 vCenter Servers and took a look at the mounted partitions. Sure enough, it was different from previous versions (5.5) that I’d seen.

# mount
 /dev/sda3 on / type ext3 (rw)
 proc on /proc type proc (rw)
 sysfs on /sys type sysfs (rw)
 udev on /dev type tmpfs (rw,mode=0755)
 tmpfs on /dev/shm type tmpfs (rw,mode=1777)
 devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
 /dev/sda1 on /boot type ext3 (rw,noexec,nosuid,nodev,noacl)
 /dev/mapper/core_vg-core on /storage/core type ext3 (rw)
 /dev/mapper/log_vg-log on /storage/log type ext3 (rw)
 /dev/mapper/db_vg-db on /storage/db type ext3 (rw,noatime,nodiratime)
 /dev/mapper/dblog_vg-dblog on /storage/dblog type ext3 (rw,noatime,nodiratime)
 /dev/mapper/seat_vg-seat on /storage/seat type ext3 (rw,noatime,nodiratime)
 /dev/mapper/netdump_vg-netdump on /storage/netdump type ext3 (rw)
 /dev/mapper/autodeploy_vg-autodeploy on /storage/autodeploy type ext3 (rw)
 /dev/mapper/invsvc_vg-invsvc on /storage/invsvc type ext3 (rw,noatime,nodiratime)

So yes indeed, here were all these “new” VCSA mount points. If you are interested in looking at the LVM in more detail, the following CLI commands, run from a VCSA shell session, will help:

  • pvdisplay -m : displays the mappings between physical volumes, volume groups and logical volumes
  • vgdisplay : displays information about volume groups. Each logical volume on the VCSA is in its own volume group
  • lvdisplay : displays information about the logical volumes
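If you just want to see which mounts are LVM-backed without walking through the full pvdisplay/vgdisplay/lvdisplay output, you can also filter the mount table for device-mapper entries, since logical volumes show up as /dev/mapper/&lt;vg&gt;-&lt;lv&gt;. A minimal sketch (the sample lines below are a subset of the 6.0 mount table above; on a live VCSA you would pipe the real `mount` output in instead):

```shell
# Sample mount-table lines (taken from the VCSA 6.0 output above).
mount_table='/dev/sda3 on / type ext3 (rw)
/dev/mapper/log_vg-log on /storage/log type ext3 (rw)
/dev/mapper/db_vg-db on /storage/db type ext3 (rw,noatime,nodiratime)'

# Field 1 is the device, field 3 the mount point; keep only LVM devices.
echo "$mount_table" | awk '$1 ~ /^\/dev\/mapper\// { print $1, "->", $3 }'
# → /dev/mapper/log_vg-log -> /storage/log
# → /dev/mapper/db_vg-db -> /storage/db
```

On a VCSA 5.5 appliance the same filter prints nothing, which is a quick way to confirm LVM is not in play there.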

These commands are also present in VCSA 5.5, by the way, but they do not display any information since LVM is not used there. The following outputs, taken from VCSA 5.5U2 (build 2442329), demonstrate this:

vcsa-01a:~ # pvdisplay -m
vcsa-01a:~ #
vcsa-01a:~ # lvdisplay  
No volume groups found
vcsa-01a:~ #
vcsa-01a:~ # vgdisplay  
No volume groups found
vcsa-01a:~ #

And just by way of comparison, here is the mount table from the 5.5 version of VCSA:

vcsa-01a:~ # mount
/dev/sda3 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/sda1 on /boot type ext3 (rw,noexec,nosuid,nodev,noacl)
/dev/sdb1 on /storage/core type ext3 (rw,nosuid,nodev)
/dev/sdb2 on /storage/log type ext3 (rw,nosuid,nodev)
/dev/sdb3 on /storage/db type ext3 (rw,nosuid,nodev)
vcsa-01a:~ #

Note that it is using disk partitions (/dev/sdX) and not logical volumes.

[Updated] So back to the issue at hand. How did I fix the file system inconsistency? First off, I had already changed the default shell from appliancesh (the default) to bash, as I was fed up with typing “shell.set --enabled True” every time I logged in. So, very simply, I provided my root password to drop to a shell prompt and ran fsck /dev/mapper/log_vg-log. Once the inconsistency was fixed, I pressed Ctrl-D and the VCSA proceeded to boot as normal.
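For reference, the repair boils down to something like the following once you have a shell in maintenance mode. This is a sketch rather than a verbatim transcript of my session: the existence check and the wrapper function are my own additions for safety, and the device path defaults to the volume my boot-time fsck complained about — substitute whichever /dev/mapper device your appliance reports:

```shell
#!/bin/sh
# Sketch of the maintenance-mode repair. The volume must NOT be mounted
# while fsck repairs it (in maintenance mode the /storage volumes are not).
check_volume() {
    dev=$1
    if [ -e "$dev" ]; then
        fsck "$dev"
    else
        echo "device $dev not present; nothing to check"
    fi
}

# Default to the volume from my fsck prompt; pass another device as $1.
check_volume "${1:-/dev/mapper/log_vg-log}"
# After a clean fsck, press Ctrl-D and the appliance continues booting.
```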

I understand that it may not be as simple for those of you still using the appliancesh shell. It appears that you cannot drop into a shell in maintenance mode (I don’t know the reason why), as it complains with Unknown command: ‘shell.set’. Therefore you will have to boot the VCSA with some GRUB options to get a bash shell, as outlined in KB article 2069041. Then you can drop to the shell as I did and run the fsck.
