A first look at vFile – Sharing a persistent volume between containers

Regular readers will have noticed that I have been doing a bit of work recently with Docker Swarm, and what you need to do to get it working on VMs running on vSphere. The reason I have taken such an interest is that I wanted to look at a new product that our Project Hatchway team have been cooking up, namely vFile. In a nutshell, vFile provides simultaneous, persistent volume access between nodes in the same Docker Swarm cluster. In some ways, it can be thought of as an extension to vDVS, the vSphere Docker Volume Service (from the same team), which provides persistent storage for containers. vFile allows these persistent volumes to be shared between containers, even when the container hosts are on completely different ESXi hosts. Let’s take a closer look.

Swarm, overlay and ETCD networking requirements

This is probably the area that trips up most people when they get started with vFile (it certainly took me a while). There are a number of networking prerequisites, i.e. firewall ports that must be opened. First of all, there is a requirement to open a port to allow the docker swarm nodes to talk to one another. Then there is the communication needed for the docker overlay network. Please take a look at these posts, which describe which firewall ports need to be opened for docker swarm and for the overlay network respectively. Unless these are working correctly, you won’t get far.

Secondly, there is a requirement around ETCD, which vFile uses for cluster coordination and state management. You need to make sure that the ETCD ports (2379, 2380) are opened on your swarm manager VM(s): 2379 is for ETCD client requests, and 2380 is for peer communication. You can easily identify this issue in /var/log/vfile.log – if ETCD is not working, it will contain a line similar to:

2017-12-21 09:56:41.996637765 +0000 UTC [ERROR] Failed to create ETCD client according to manager info Swarm ID=ovn08qobd7tjs3qrr4pwetxa5 IP Addr=....

What you want to see in the log file is ETCD working correctly, like this (though you won’t see these entries until the vFile plugin has been deployed):

2017-12-21 11:53:36.941373995 +0000 UTC [INFO] vFile plugin started version="vFile Volume Driver v0.2"
2017-12-21 11:53:36.941477095 +0000 UTC [INFO] Going into ServeUnix - Listening on Unix socket address="/run/docker/plugins/vfile.sock"
2017-12-21 11:53:36.941884271 +0000 UTC [INFO] Started loading file server image
2017-12-21 11:53:36.974808787 +0000 UTC [INFO] getEtcdPorts: clientPort=:2379 peerPort=:2380
2017-12-21 11:53:36.974836665 +0000 UTC [INFO] Swarm node role: worker. Return from NewKvStore nodeID=z6oep8ngrpaqf25oqnsyomjfk
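
If you want to watch these messages appear in real time while you deploy the plugin, simply tail the log on the node in question:

tail -f /var/log/vfile.log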

So, to recap, the ports that need to be opened are as follows (there is a firewalld example after this list):

  • Swarm – 2377/tcp
  • Swarm overlay network – 7946/tcp, 7946/udp, 4789/udp
  • ETCD – 2379/tcp, 2380/tcp
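
On the CentOS swarm VMs used here, one way to open these is with firewalld. This is just a minimal sketch, assuming firewalld is the active firewall on each VM – adjust to whatever firewall you are actually running:

# run on every swarm node VM (the ETCD ports only need to be reachable on the manager VMs)
firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --permanent --add-port=2379/tcp --add-port=2380/tcp
firewall-cmd --reload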


Deploy the vFile docker plugin

vFile, just like vDVS, is a docker plugin – the installation steps can be found by clicking here, but it boils down to a simple “docker plugin install”.

[root@centos-swarm-w1 ~]# docker plugin install --grant-all-permissions --alias vfile vmware/vfile:latest VFILE_TIMEOUT_IN_SECOND=90
latest: Pulling from vmware/vfile
cb8aba6f1749: Download complete
Digest: sha256:7ab7abc795e60c443583639325011f878e79ce5f085c56a525fc098b02fce343
Status: Downloaded newer image for vmware/vfile:latest
Installed plugin vmware/vfile:latest
[root@centos-swarm-w1 ~]#

This plugin must be deployed on all swarm nodes/VMs.
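
If you have more than a couple of nodes, one option is to push the install out over SSH rather than logging into each VM in turn. A rough sketch, assuming passwordless root SSH to each node; the hostnames below are just placeholders for your own swarm VMs:

for node in centos-swarm-master centos-swarm-w1 centos-swarm-w2; do
  ssh root@${node} docker plugin install --grant-all-permissions --alias vfile vmware/vfile:latest VFILE_TIMEOUT_IN_SECOND=90
done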

[root@centos-swarm-master ~]# docker plugin ls 
ID           NAME           DESCRIPTION                         ENABLED 
e84b390e832d vsphere:latest VMWare vSphere Docker Volume plugin true 
fccf36e22ecc vfile:latest   VMWare vFile Docker Volume plugin   true 
[root@centos-swarm-master ~]#

As I mentioned, vDVS (vSphere Docker Volume Service) is also required. There are a number of posts on this site on how to get started with vDVS. Alternatively, you can check out the official docs here.
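
If you don’t have vDVS in place yet, note that it has two parts: a VIB installed on each ESXi host and a docker plugin installed in each VM. The plugin piece looks very similar to the vFile install above; this is a hedged sketch only, as the exact image name and version to use depend on your release – check the vDVS docs:

docker plugin install --grant-all-permissions --alias vsphere vmware/vsphere-storage-for-docker:latest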

Create a shared volume

Now we can go ahead and create a small test volume using the vFile plugin. After it is created, we can examine it in more detail. Note that two volumes will be listed after this command completes: one for vFile and another for the vSphere Docker Volume Service, but they refer to the same volume. This is normal. Also note that I am specifying a storage policy for this volume called R5 (RAID-5). This is a policy I created on vSAN; since the container host VM resides on vSAN, the container volume created here will be placed on vSAN as well.

[root@centos-swarm-master ~]#  docker volume create --driver=vfile --name=SharedVol -o size=10gb -o vsan-policy-name=R5
SharedVol


[root@centos-swarm-master ~]# docker volume ls
DRIVER         VOLUME NAME
vfile:latest   SharedVol
vsphere:latest _vF_SharedVol@vsanDatastore


[root@centos-swarm-master ~]# docker volume inspect SharedVol
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "vfile:latest",
        "Labels": {},
        "Mountpoint": "/mnt/vfile/SharedVol/",
        "Name": "SharedVol",
        "Options": {
            "size": "10gb",
            "vsan-policy-name": "R5"
        },
        "Scope": "global",
        "Status": {
            "Clients": null,
            "File server Port": 0,
            "Global Refcount": 0,
            "Service name": "",
            "Volume Status": "Ready"
        }
    }
]
[root@centos-swarm-master ~]#
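
Out of curiosity, you can also run a docker volume inspect against the internal _vF_ volume listed earlier to see what the vSphere driver reports for the backing volume:

docker volume inspect _vF_SharedVol@vsanDatastore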

Excellent. Now let’s see if we can share this volume with containers launched from the other nodes in the cluster.


Access it from multiple containers

In this example, we shall use a simple busybox image and present the volume to it. Let’s begin on a worker node. First, verify that the volume is visible, then start busybox with the shared volume mounted (on /mnt/myvol). Once inside the busybox shell, create a directory and some files.

[root@centos-swarm-w1 ~]# docker volume ls
DRIVER         VOLUME NAME
vfile:latest   SharedVol
vsphere:latest _vF_SharedVol@vsanDatastore

[root@centos-swarm-w1 ~]# docker run --rm -it -v SharedVol:/mnt/myvol --name busybox-on-worker busybox
/ # cd /mnt/myvol
/mnt/myvol # ls
lost+found
/mnt/myvol # mkdir cormac
/mnt/myvol # cd cormac/
/mnt/myvol/cormac # touch xxx
/mnt/myvol/cormac # touch yyy
/mnt/myvol/cormac # touch zzz
/mnt/myvol/cormac # ls
xxx yyy zzz
/mnt/myvol/cormac #


Let’s now verify that we can see the same data by mounting this volume into another container, this time on another node (normally this would be another worker, but in this example I am using my master):

[root@centos-swarm-master ~]# docker run --rm -it -v SharedVol:/mnt/myvol --name busybox-on-master busybox
/ # cd /mnt/myvol/
/mnt/myvol # ls
cormac lost+found
/mnt/myvol # cd cormac
/mnt/myvol/cormac # ls
xxx yyy zzz
/mnt/myvol/cormac #


Success! A persistent container volume shared between multiple containers and container hosts without the need for something like NFS, Ceph or Gluster. How cool is that?

Let’s take one final peek at our volume:

[root@centos-swarm-master ~]# docker volume ls
DRIVER         VOLUME NAME
vfile:latest   SharedVol
vsphere:latest _vF_SharedVol@vsanDatastore

[root@centos-swarm-master ~]# docker volume inspect SharedVol
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "vfile:latest",
        "Labels": {},
        "Mountpoint": "/mnt/vfile/SharedVol/",
        "Name": "SharedVol",
        "Options": {
            "size": "10gb",
            "vsan-policy-name": "R5"
        },
        "Scope": "global",
        "Status": {
            "Clients": [
                "10.27.51.146",
                "10.27.51.147"
            ],
            "File server Port": 30000,
            "Global Refcount": 2,
            "Service name": "vFileServerSharedVol",
            "Volume Status": "Mounted"
        }
    }
]

The status has changed from Ready to Mounted, and the two container hosts now appear as clients of the volume. Looks good.
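
When you are finished experimenting, exit the busybox containers (they were started with --rm, so they are cleaned up automatically) and the shared volume can then be removed in the usual way:

docker volume rm SharedVol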

If you plan to do some testing with vFile, the Hatchway team would love to get your feedback. They can easily be contacted on GitHub by clicking here.