
Docker Volume Driver for vSphere on Virtual SAN

I took another opportunity this week to look at our new Docker Volume Driver for vSphere, which is currently in tech preview. This time I wanted to see how it behaved on Virtual SAN (VSAN). In particular, I wanted to look at the layout of the resulting VMDK storage object on VSAN, and at how an administrator can query that layout from vCenter server and from RVC, the Ruby vSphere Console. There might well be a situation where you need to query this information.

My colleague, William Lam, has already written about how you can deploy volumes with different policies on VSAN in his excellent blog post here. As I said, I wanted to expand on this, and see what is happening under the covers on VSAN.

Let me first outline the three different components involved; I will jump to each in turn:

  1. VM running docker – requires an RPM to be installed; this is where we will create our volumes and containers. I used Photon OS for this example.
  2. ESXi host – requires a VIB to be installed, which provides tooling such as vmdkops_admin.py (a hedged install sketch for both pieces follows this list).
  3. vCenter server running RVC – you could of course have RVC installed somewhere else, but since it is installed on vCenter server, I will use that. We will also show the configuration from a vSphere client perspective.
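
As an aside, installing both pieces is straightforward. Here is a minimal sketch; the VIB and RPM file names below are placeholders for whatever the tech preview download actually contains, so substitute your own:

[root@esxi-hp-05:~] esxcli software vib install --no-sig-check \
-v /tmp/vmware-esx-vmdkops.vib

root@photon-machine [ ~ ]# rpm -ivh docker-volume-vsphere.rpm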

Let’s begin by creating our first volume. On the VM running docker, and with the docker volume driver for vSphere RPM installed, we run the following:

# docker volume create --driver=vmdk --name=VSANvol -o size=10gb
VSANvol
# docker volume ls
DRIVER              VOLUME NAME
vmdk                VSANvol
# docker volume inspect VSANvol
[
    {
        "Name": "VSANvol",
        "Driver": "vmdk",
        "Mountpoint": "/mnt/vmdk/VSANvol",
        "Labels": {}
    }
]
#
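
Note that size is not the only option you can pass. The driver also accepts a VSAN policy at create time via the vsan-policy-name option; this is the area William's post covers in detail. Assuming a policy called myPolicy had already been defined on the ESXi host (a hypothetical name, purely for illustration), the command would look something like this:

# docker volume create --driver=vmdk --name=VSANvol2 \
-o size=10gb -o vsan-policy-name=myPolicy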

Our VSANvol volume is now created. Let's look at it from an ESXi host perspective. We are running this command on the ESXi host where the docker VM is running:

[root@esxi-hp-05:~] /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls \
-c volume,datastore,created-by,policy,attached-to,capacity,used
Volume   Datastore      Created By  Attached To  Policy          Capacity  Used
-------  -------------  ----------  -----------  --------------  --------  --------
VSANvol  vsanDatastore  Photon-2    detached     [VSAN default]  10.00GB   728.00MB

[root@esxi-hp-05:~]
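
The admin tool can do more than list volumes. Running it with --help shows the available subcommands, and there is a policy namespace as well; policy ls here is an assumption on my part, based on the policy create subcommand we will meet at the end of this post:

[root@esxi-hp-05:~] /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py --help
[root@esxi-hp-05:~] /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy ls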

We can see that the volume is not yet attached to any container. But how does this appear on the VSAN datastore? We now have a “floating” VMDK object (for want of a better description). The only thing we know is that it is using the VSAN default policy, which sets number of failures to tolerate = 1 (a RAID-1 configuration), that less than 1 GB of space is used, and that it is not attached to any VM. There are two RVC commands that we can use to look at this object: vsan.disks_stats and vsan.disk_object_info. The first will supply the disk identifier, and the second will show us any parts of an object that reside on that disk (slide right to see the full output):

> vsan.disks_stats 10.27.51.7
+--------------------------------------+------------+-------+------+-----------+--------+----------+---------+
|                                      |            |       | Num  | Capacity  |        |          | Status  |
| DisplayName                          | Host       | isSSD | Comp | Total     | Used   | Reserved | Health  |
+--------------------------------------+------------+-------+------+-----------+--------+----------+---------+
| naa.600508b1001cc5956fa4ceab9c0f3840 | 10.27.51.7 | SSD   | 0    | 186.27 GB | 0.00 % | 0.00 %   | OK (v3) |
| naa.600508b1001c357b9abfce4730e1b697 | 10.27.51.7 | MD    | 6    | 737.72 GB | 3.55 % | 1.21 %   | OK (v3) |
+--------------------------------------+------------+-------+------+-----------+--------+----------+---------+

Now that I have the NAA id, I can display all of the objects/components on the disk:

> vsan.disk_object_info 10.27.51.7 naa.600508b1001c357b9abfce4730e1b697
Physical disk naa.600508b1001c357b9abfce4730e1b697 \
(522e5e4c-a45a-f724-ae04-73ac004b835d):
  DOM Object: 6b355957-229e-dd6a-78e7-a0369f30c548 (v3, owner: unknown, \
policy: hostFailuresToTolerate = 1)
    Context: Can't attribute object to any VM, may be swap?
    RAID_1
      Component: 6b355957-81e5-2a6b-8e93-a0369f30c548 (state: ACTIVE (5), \
      host: 10.27.51.7, \
      md: naa.600508b1001c357b9abfce4730e1b697, \
      ssd: naa.600508b1001cc5956fa4ceab9c0f3840,
                                                       votes: 1, usage: 0.4 GB)
      Component: 6b355957-f689-2c6b-4666-a0369f30c548 (state: ACTIVE (5), \
      host: 569ca570-06ce-1870-391b-a0369f56ddbc, \
      md: 520ace99-40b3-b0f5-fc46-d96a69093bfe, \
      ssd: 52c57e31-e5ff-8711-eda4-c8ffaf0e5b52,
                                                       votes: 1, usage: 0.4 GB)
    Witness: 6b355957-1cdc-2d6b-9a88-a0369f30c548 (state: ACTIVE (5), \
      host: 569c86cd-19dd-0723-8962-a0369f30c548, \
      md: 52082392-841d-5290-c713-420e610ba3d6, \
      ssd: 525101f5-821b-d156-06fe-9ae4fcfe630f,
                                                   votes: 1, usage: 0.0 GB)

I have truncated the output here, as there are many more objects and components displayed. However, this is our “unattached” VMDK because (a) it is not associated with any VM, (b) its policy is simplified to hostFailuresToTolerate = 1 and nothing else, (c) it is a RAID-1, and (d) the size more or less matches what was displayed on the ESXi host.
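
As an aside, now that we have the DOM object UUID (6b355957-229e-dd6a-78e7-a0369f30c548), we could also query the object directly with the vsan.object_info RVC command rather than walking a physical disk. Something along these lines, where <path-to-cluster> is the RVC path to your own cluster:

> vsan.object_info <path-to-cluster> 6b355957-229e-dd6a-78e7-a0369f30c548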

Note that if I try to look at this in the vSphere web client, it is not visible. The only VMDKs that I can see are those attached to VMs, such as the disks belonging to my Photon OS VM running docker.

OK – let's go and attach this docker volume to a container, consume some space, and see how that shakes things up.

root@photon-machine [ ~ ]# docker run -it -v VSANvol:/VSANvol ubuntu bash
root@50713467a80f:/# df
Filesystem                                      1K-blocks    Used Available Use% Mounted on
overlay                                           8122788 3642804   4044328  48% /
tmpfs                                             4085412       0   4085412   0% /dev
tmpfs                                             4085412       0   4085412   0% /sys/fs/cgroup
/dev/disk/by-path/pci-0000:0b:00.0-scsi-0:0:0:0  10321208   23028   9773892   1% /VSANvol
/dev/root                                         8122788 3642804   4044328  48% /etc/hosts
shm                                                 65536       0     65536   0% /dev/shm

root@50713467a80f:/# cd /VSANvol

root@50713467a80f:/VSANvol#  dd if=/dev/zero of=tmpfile count=1024 bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.29176 s, 250 MB/s

root@50713467a80f:/VSANvol# df
Filesystem                                      1K-blocks    Used Available Use% Mounted on
overlay                                           8122788 3642804   4044328  48% /
tmpfs                                             4085412       0   4085412   0% /dev
tmpfs                                             4085412       0   4085412   0% /sys/fs/cgroup
/dev/disk/by-path/pci-0000:0b:00.0-scsi-0:0:0:0  10321208 1072632   8724288  11% /VSANvol
/dev/root                                         8122788 3642804   4044328  48% /etc/hosts
shm                                                 65536       0     65536   0% /dev/shm

Let’s now check how things look from an ESXi host perspective (slide right to see the full output):

[root@esxi-hp-05:~] /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py \
ls -c volume,datastore,created-by,policy,attached-to,capacity,used
Volume   Datastore      Created By  Attached To                           Policy          Capacity  Used
-------  -------------  ----------  ------------------------------------  --------------  --------  ------
VSANvol  vsanDatastore  Photon-2    4206ee45-c558-6995-0767-0e747a21a8fa  [VSAN default]  10.00GB   2.00GB

[root@esxi-hp-05:~]
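
Note that the Attached To column displays a VM UUID rather than a VM name. If you want to map that UUID back to a VM, vim-cmd on the ESXi host is one way to do it; list the VMs to get a vmid, then look for the uuid field in the summary of the likely candidate:

[root@esxi-hp-05:~] vim-cmd vmsvc/getallvms
[root@esxi-hp-05:~] vim-cmd vmsvc/get.summary <vmid> | grep -i uuid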

We can see that the volume is now “attached” and that the used space has increased. Let's now revisit the object that we looked at previously in RVC:

> vsan.disk_object_info 10.27.51.7 naa.600508b1001c357b9abfce4730e1b697
Physical disk naa.600508b1001c357b9abfce4730e1b697 \
(522e5e4c-a45a-f724-ae04-73ac004b835d):
  DOM Object: 6b355957-229e-dd6a-78e7-a0369f30c548 (v3, owner: unknown, \
policy: hostFailuresToTolerate = 1)
    Context: Part of VM Photon-2: Disk: \
    [vsanDatastore] 31235857-1058-b764-b51c-a0369f30c548/VSANvol.vmdk
    RAID_1
      Component: 6b355957-81e5-2a6b-8e93-a0369f30c548 (state: ACTIVE (5), \
      host: 10.27.51.7, \
      md: naa.600508b1001c357b9abfce4730e1b697, \
      ssd: naa.600508b1001cc5956fa4ceab9c0f3840,
                                                       votes: 1, usage: 1.4 GB)
      Component: 6b355957-f689-2c6b-4666-a0369f30c548 (state: ACTIVE (5), \
      host: 569ca570-06ce-1870-391b-a0369f56ddbc, \
      md: 520ace99-40b3-b0f5-fc46-d96a69093bfe, \
      ssd: 52c57e31-e5ff-8711-eda4-c8ffaf0e5b52,
                                                       votes: 1, usage: 1.4 GB)
    Witness: 6b355957-1cdc-2d6b-9a88-a0369f30c548 (state: ACTIVE (5), \
      host: 569c86cd-19dd-0723-8962-a0369f30c548, \
      md: 52082392-841d-5290-c713-420e610ba3d6, \
      ssd: 525101f5-821b-d156-06fe-9ae4fcfe630f,
                                                   votes: 1, usage: 0.0 GB)

Again, I have truncated the output as before. You can see it is the same DOM object that we looked at earlier, but this time it has a context (it is part of the VM Photon-2), and we also have the name of the VMDK, VSANvol.vmdk. The usage has also increased by 1 GB as a result of the 1 GB tmpfile that we created on the volume in the container.

And if we check the vSphere web client, we can see that the VMDK is now visible (the container to which the volume is attached runs on the VM called Photon-2); it shows up as Hard disk 3.
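
One final point worth making before wrapping up: the volume and its data persist independently of any container. If we exit the container (at which point the VMDK is detached from the VM), a brand new container mounting the same volume will still see the data, which we can verify with something like this:

root@50713467a80f:/VSANvol# exit
root@photon-machine [ ~ ]# docker run --rm -v VSANvol:/VSANvol ubuntu ls /VSANvol

The ls should list the tmpfile that we created earlier.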

If you wish to try out additional policies using /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy create, William has the details in the blog post that I highlighted earlier.
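
To give a flavour of what that looks like, here is a sketch of creating a policy with no mirroring; the policy name FTT0 and the exact content expression are my own illustration, so double-check the syntax against William's post:

[root@esxi-hp-05:~] /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy create \
--name FTT0 --content '(("hostFailuresToTolerate" i0))'

Such a policy could then be consumed from docker via the vsan-policy-name option shown earlier. Hopefully this gives a better understanding of how the Docker Volume Driver for vSphere works with VSAN.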
