Docker Volume Driver for vSphere

This is a really cool development. There is now a docker volume driver for vSphere, which we just made public last night and is now available as a tech preview. This will allow customers to address persistent storage requirements for Docker containers in vSphere environments. Basically, it allows you to create a VMDK and use it as a persistent storage volume for containers. In this post, I will outline the steps involved in getting started with the Docker Volume Driver for vSphere. In essence, there are 4 steps:

  1. Install the docker volume plugin on the ESXi host. I was running ESXi 6.0U2.
  2. Deploy a Photon OS VM (although you can also use Ubuntu)
  3. Install the docker VMDK plugin in the VM
  4. Create a docker volume and run a container to consume it

All the pieces to get you started are available on github here.

1. Install Docker Volume plugin on ESXi host

You will need to include the --no-sig-check option with the vib install to bypass acceptance level verification, including signing.

[root@esxi-hp-08:~] esxcli software vib install \
-d "/vmfs/volumes/569c904a-8880cc78-a5c7-a0369f56ddc0/\
vmdkops/vmware-esx-vmdkops-0.1.0.tp.zip" --no-sig-check -f
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: VMWare_bootbank_esx-vmdkops-service_1.0.0-0.0.1
   VIBs Removed:
   VIBs Skipped:
[root@esxi-hp-08:~]

No reboot is required once installed. Check the status of the service as follows:

[root@esxi-hp-08:~] /etc/init.d/vmdk-opsd status
vmdkops-opsd is running pid=12343527
[root@esxi-hp-08:~]

Logs are available in /var/log/vmware/vmdk_ops.log:

05/27/16 14:18:42 12343527 [INFO   ] Log configuration generated \
- '/etc/vmware/vmdkops/log_config.json'.
05/27/16 14:18:42 12343527 [INFO   ] === Starting vmdkops service ===
.
.
.
05/30/16 09:06:20 12343527 [INFO ] Created /vmfs/volumes/562f4eef-04977492\
-d303-a0369f56ddc0/dockvols
05/30/16 09:15:57 12343527 [INFO ] *** createVMDK: /vmfs/volumes/562f4eef\
-04977492-d303-a0369f56ddc0/dockvols/MyVolume.vmdk opts = {u'size': u'10gb'}
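
If you just want to pick the volume-creation events out of that log, a simple grep does the job. The sketch below runs the filter against a sample line copied from the excerpt above, so it can be tried anywhere; on the host itself you would grep the log file directly.

```shell
# Sketch: isolate createVMDK events from vmdk_ops.log. The sample line is
# copied from the log excerpt above; on the ESXi host you would instead run:
#   grep createVMDK /var/log/vmware/vmdk_ops.log
log="05/30/16 09:15:57 12343527 [INFO] *** createVMDK: /vmfs/volumes/562f4eef-04977492-d303-a0369f56ddc0/dockvols/MyVolume.vmdk opts = {u'size': u'10gb'}"
# -o prints only the matching part: the createVMDK marker plus the VMDK path
echo "$log" | grep -o 'createVMDK: [^ ]*'
```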

2. Deploy Photon OS

I simply rolled out the latest OVA 1.0 version of Photon OS (which is also our release candidate – RC). You can get the details on github here. As mentioned, there are also instructions to install on Ubuntu if you wish to use that flavor instead of Photon OS.

3. Install docker VMDK plugin on Photon OS

For Photon OS, there is an RPM that we must install. There is one requirement, however: the version of docker must be greater than 1.9. The latest Photon OS RC has version 1.11 installed.

root@photon-machine [ ~ ]# docker version
Client:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 19:36:04 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 19:36:04 2016
 OS/Arch:      linux/amd64
root@photon-machine [ ~ ]# 
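
If you'd rather script that minimum-version check than eyeball it, something like the following works. This is just a sketch: the version string is hard-coded from the output above, but on a live host you could populate it with docker version --format '{{.Server.Version}}'.

```shell
# Hedged sketch: verify the installed docker version meets the 1.9 minimum.
# 'installed' is hard-coded from the output above; on a live host you could
# use: installed=$(docker version --format '{{.Server.Version}}')
required="1.9"
installed="1.11.0"
# sort -V orders version strings numerically; if the required minimum sorts
# first, the installed version is new enough
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "docker $installed is new enough (>= $required)"
else
  echo "docker $installed is older than $required" >&2
fi
```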

Install the RPM (I’ve used “-U” out of habit, but “-i” can also be used):

root@photon-machine [ ~ ]# ls
docker-volume-vsphere-0.1.0.tp-1.x86_64.rpm
root@photon-machine [ ~ ]# rpm -Uvh docker-volume-vsphere-0.1.0.tp-1.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:docker-volume-vsphere-0:0.1.0.tp-################################# [100%]
  File: '/proc/1/exe' -> '/usr/lib/systemd/systemd'
Created symlink from /etc/systemd/system/multi-user.target.wants/\
docker-volume-vsphere.service to /usr/lib/systemd/system/docker-volume-vsphere.service.
Check the status of the plugin:
root@photon-machine [ ~ ]# systemctl status docker-volume-vsphere
* docker-volume-vsphere.service - "Docker Volume Driver for vSphere"
   Loaded: loaded (/usr/lib/systemd/system/docker-volume-vsphere.service;\
 enabled; vendor preset: enabled)
   Active: active (running) since Mon 2016-05-30 09:04:21 UTC; 28s ago
 Main PID: 256 (docker-volume-v)
   CGroup: /system.slice/docker-volume-vsphere.service
           `-256 /usr/local/bin/docker-volume-vsphere

May 30 09:04:21 photon-machine systemd[1]: Started "Docker Volume Driver\
 for....
Hint: Some lines were ellipsized, use -l to show in full.
root@photon-machine [ ~ ]#

Logs can be found in /var/log/docker-volume-vsphere.log on Photon OS:

root@photon-machine [ /var/log ]# tail -f docker-volume-vsphere.log
2016-05-30 09:06:20.130905778 +0000 UTC [DEBUG] vmdkOps.List
2016-05-30 09:15:56.782560555 +0000 UTC [DEBUG] vmdkOps.Get name=MyVolume
2016-05-30 09:15:56.78264008 +0000 UTC [DEBUG] vmdkOps.List
2016-05-30 09:15:56.78533104 +0000 UTC [DEBUG] vmdkOp.Create name=MyVolume
2016-05-30 09:15:57.459536256 +0000 UTC [INFO] Volume created name=MyVolume

OK, that’s all the constituent parts installed on both the hypervisor and the VM. Let’s now see it in action.

4. Create volume for use by containers

In this example, I will create a 20GB VMDK called MyVolume, and use it as a persistent volume for a container running Ubuntu first, and then a container running Debian. First I must create the volume.

root@photon-machine [ ~ ]# docker volume create --driver=vmdk \
--name=MyVolume -o size=20gb
MyVolume
root@photon-machine [ ~ ]# docker volume ls
DRIVER              VOLUME NAME
vmdk                MyVolume
root@photon-machine [ ~ ]#
root@photon-machine [ ~ ]# docker volume inspect MyVolume
[
    {
        "Name": "MyVolume",
        "Driver": "vmdk",
        "Mountpoint": "/mnt/vmdk/MyVolume",
        "Labels": {}
    }
]
root@photon-machine [ ~ ]#
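
If you want to use that Mountpoint in a script, you can pull it out of the inspect output. The sketch below works on the JSON pasted above; on a live host, docker volume inspect with a --format template should give you the same value directly (hedged: format-template support on volume inspect may depend on your docker version).

```shell
# Sketch: extract the Mountpoint from 'docker volume inspect' output with sed.
# The JSON here is pasted from the inspect output above; on a live host,
#   docker volume inspect -f '{{ .Mountpoint }}' MyVolume
# should produce the same value directly (support may vary by docker version)
inspect='[{"Name": "MyVolume", "Driver": "vmdk", "Mountpoint": "/mnt/vmdk/MyVolume", "Labels": {}}]'
# capture everything between the quotes that follow the "Mountpoint": key
mount_path=$(echo "$inspect" | sed -n 's/.*"Mountpoint": "\([^"]*\)".*/\1/p')
echo "$mount_path"   # /mnt/vmdk/MyVolume
```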

In this next step, I will run a container with an Ubuntu image, and pass the volume MyVolume to it for persistent storage. It should appear in the path /MyVolume in the container. Note: Do not provide the full mountpoint; only the volume name is required:

root@photon-machine [ ~ ]# docker run -it -v MyVolume:/MyVolume ubuntu bash
root@bd9410fb4c1d:/# ls
Myvolume  bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  \
root  run  sbin  srv  sys  tmp  usr  var
root@fe8c21d003fa:/# df
Filesystem                                      1K-blocks    Used Available Use% Mounted on
overlay                                           8122788 6095776   1591356  80% /
tmpfs                                             4085412       0   4085412   0% /dev
tmpfs                                             4085412       0   4085412   0% /sys/fs/cgroup
/dev/disk/by-path/pci-0000:0b:00.0-scsi-0:0:0:0  20642428   44992  19548860   1% /MyVolume
/dev/root                                         8122788 6095776   1591356  80% /etc/hosts
shm                                                 65536       0     65536   0% /dev/shm
root@fe8c21d003fa:/#
root@bd9410fb4c1d:/# cd Myvolume/
root@bd9410fb4c1d:/Myvolume# ls
root@bd9410fb4c1d:/Myvolume#

At the moment the volume is empty. Let’s put some content in it.

root@bd9410fb4c1d:/Myvolume# mkdir Cormac
root@bd9410fb4c1d:/Myvolume# cd Cormac
root@bd9410fb4c1d:/Myvolume/Cormac# echo "Very important file that must \
persist" >> important
root@bd9410fb4c1d:/Myvolume/Cormac# ls
important
root@bd9410fb4c1d:/Myvolume/Cormac# cat important
Very important file that must persist
root@bd9410fb4c1d:/Myvolume/Cormac# exit
exit
root@photon-machine [ ~ ]#

We’ve added a little bit of content, but we have also stopped the container. Just to show that the content persists, let’s run another container and pass it the same volume, this time with a Debian image, and see if we can access the previously created content.

root@photon-machine [ ~ ]# docker run -it -v MyVolume:/MyVolume debian bash
Unable to find image 'debian:latest' locally
latest: Pulling from library/debian
51f5c6a04d83: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:978927d00fdd51a21dab7148aa8bbc704a69b518fa6a12aa8f45be3f03495860
Status: Downloaded newer image for debian:latest
root@348fd34de643:/# ls
Myvolume  bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  \
root  run  sbin  srv  sys  tmp  usr  var
root@348fd34de643:/# df
Filesystem                                      1K-blocks    Used Available Use% Mounted on
overlay                                           8122788 6095920   1591212  80% /
tmpfs                                             4085412       0   4085412   0% /dev
tmpfs                                             4085412       0   4085412   0% /sys/fs/cgroup
/dev/disk/by-path/pci-0000:0b:00.0-scsi-0:0:0:0  20642428   44992  19548860   1% /MyVolume
/dev/root                                         8122788 6095920   1591212  80% /etc/hosts
shm                                                 65536       0     65536   0% /dev/shm
root@348fd34de643:/#
root@348fd34de643:/# cd Myvolume/ 
root@348fd34de643:/Myvolume# ls Cormac 
root@348fd34de643:/Myvolume# cd Cormac/ 
root@348fd34de643:/Myvolume/Cormac# ls 
important 
root@348fd34de643:/Myvolume/Cormac# cat important 
Very important file that must persist 
root@348fd34de643:/Myvolume/Cormac#

And there we have it: containers using VMDKs on vSphere datastores to persist data. If you want to see what is created on the actual datastore at the ESXi host level, you can navigate to it as normal from the ESXi host. The volumes are stored in a folder called dockvols:

[root@esxi-hp-08:/vmfs/volumes/562f4eef-04977492-d303-a0369f56ddc0/dockvols] ls -l
total 506880
-rw-------    1 root     root           173 Jun  3 06:50 MyVolume-6c182e87765678fb.vmfd
-rw-------    1 root     root     21474836480 Jun  3 06:51 MyVolume-flat.vmdk
-rw-------    1 root     root           564 Jun  3 06:50 MyVolume.vmdk
[root@esxi-hp-08:/vmfs/volumes/562f4eef-04977492-d303-a0369f56ddc0/dockvols]
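
The size of the -flat.vmdk lines up with the 20gb we asked for, assuming the driver interprets gb as GiB (1024^3 bytes). A quick arithmetic check:

```shell
# Quick check: does the -flat.vmdk size match the requested 20gb?
# Assumption: the driver treats 'gb' as GiB, i.e. 1024^3 bytes per gb
requested_bytes=$((20 * 1024 * 1024 * 1024))
flat_size=21474836480   # size of MyVolume-flat.vmdk from the ls -l output above
if [ "$requested_bytes" -eq "$flat_size" ]; then
  echo "sizes match: $flat_size bytes = 20 GiB"
fi
```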

Although this example only shows a local VMFS datastore being used for persistent storage, the docker volume driver for vSphere also works with shared VMFS, NFS and VSAN. I’ll try to revisit the VSAN implementation in a future post to see how it works with policies, etc.

6 Replies to “Docker Volume Driver for vSphere”

  1. From a vSphere point of view where are the permissions coming from? If I install the vib on a host is any VM Admin able to add the drivers to their VMs and start creating VMDKs at will?

    1. Right now that is the case Josh, but the driver is still in beta. There is still work to be done in the area of multi-tenancy, etc. So now would be a good time to provide any input you have on how you think this should work.

      1. Leverage vCenter. You guys already have a role based access solution available, leverage that stack. Scope access to particular datastore or DSCs allowing to create clusters of container hosts that only have access to particular storage. Don’t forget about the virtualization admins while trying to please the container crazed developers.
