Project Hatchway – VMware Persistent Storage for Containers

Yesterday I had the opportunity to sit in on a VMworld 2017 session delivered by one of my colleagues, Tushar Thole. Tushar presented “Project Hatchway” to the audience, and as the title of this post suggests, this is all about providing VMware persistent storage to containers. In a nutshell, volumes can now be created on VMFS, NFS and vSAN in the form of VMDKs, and these volumes can then be consumed by containers instantiated within a container host, i.e. a virtual machine. But there were some interesting new enhancements which Tushar shared with us in the session.

Tushar began by sharing an interesting nugget with the audience. There are a lot of cloud native apps which require state, for example MySQL, MongoDB, Redis, etc. Tushar then showed us the results of a survey into which of the most common cloud native apps actually have a requirement for persistent storage. The result was that 7 of the top 10 apps had such a requirement.

So obviously there is a need for persistent storage, and this is where “Project Hatchway” comes in. There were four key parts to this presentation.

  • vSphere Docker Volume Service (vDVS) – enabling persistent storage for Docker containers, including Swarm
  • vSphere Cloud Provider (VCP) – enabling persistent storage for containers orchestrated by Kubernetes
  • vDVS support for stateful storage in Windows container hosts running on ESXi
  • vFile – shared file storage for containers on top of VMware storage

I’m not going to say too much more about vDVS; I’ve talked about it multiple times already on this blog and you can find some of the links here. vDVS is made up of 2 components – one is installed on the ESXi host and the other is installed in the container host/VM. This allows the container host (VM) to request that a volume be created on the underlying vSphere storage whenever a Docker volume request is made.
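To give a flavour of the workflow, here is a hedged sketch of what creating and consuming a vDVS volume looks like from inside the container host. The `vsphere` driver name and the `size` option match the vDVS plugin as I understand it; the volume name and sizes here are made up for illustration.

```shell
# Create a 10GB VMDK-backed volume via the vDVS plugin
# (volume name and size are illustrative)
docker volume create --driver=vsphere --name=MyVolume -o size=10gb

# Confirm the volume was created
docker volume ls

# Mount the volume into a container at /mnt/data using
# a standard docker run command
docker run --rm -it -v MyVolume:/mnt/data busybox sh
```

Note that nothing here is vSphere-specific from the container's point of view – it is the standard Docker volume workflow, with the plugin doing the VMDK creation behind the scenes.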

As well as being certified by Docker, vDVS now also supports persistent storage for Windows container hosts, not just Linux container hosts. This is something worth highlighting, as I do not think there are many products that can do that currently.

The other new announcement was vFile. vFile is an experimental feature, but also a very interesting one, as it allows us to share volumes between multiple containers. It comes in the form of a Docker plugin and requires zero configuration. You simply specify that a Docker volume is of type ‘vFile’ when you instantiate it, and this makes it automatically shareable.
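Again, a hedged sketch of the usage, assuming the vFile plugin registers a volume driver called `vfile` (the volume and container names are made up):

```shell
# Create a shared volume using the experimental vFile driver
docker volume create --driver=vfile --name=SharedVol -o size=10gb

# Two containers, potentially on different container hosts,
# can now mount the same volume at the same time
docker run -d --name=writer -v SharedVol:/data busybox \
  sh -c "while true; do date >> /data/log; sleep 5; done"
docker run -d --name=reader -v SharedVol:/data busybox \
  sh -c "tail -f /data/log"
```

Contrast this with a plain vDVS volume, which is a VMDK and so is only attached to one container host at a time.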

I’ve also talked about the vSphere Cloud Provider (VCP) on this blog before, as we have used it for other Kubernetes initiatives in the past, such as kubernetes-anywhere. One other thing to point out is that VCP is also the component that provides persistent storage for PKS, the recently announced Pivotal Container Service. There are no components needed for VCP on the ESXi host; everything is already built into Kubernetes. When we create a service on Kubernetes, we specify a storage class to pick the type of storage we want for the container/service. Kubernetes then talks to vCenter to make these requests. As a summary, I think the following slide shows very well what vDVS and VCP can do for persistent storage for containers on vSphere.
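To make the storage class step concrete, here is a hedged sketch of how VCP is typically consumed. The `kubernetes.io/vsphere-volume` provisioner is the in-tree vSphere provisioner from the Kubernetes documentation of this era; the class name, claim name and sizes are illustrative:

```shell
# Define a storage class backed by the vSphere Cloud Provider
kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-thin
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
EOF

# A PersistentVolumeClaim referencing that class; Kubernetes
# asks vCenter to create the backing VMDK on our behalf
kubectl apply -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere-thin
  resources:
    requests:
      storage: 2Gi
EOF
```

A pod (for example a MySQL deployment) then simply references `mysql-pvc` in its volume spec, with no awareness that the disk is a VMDK.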

And the point to take away is that you consume all of this with standard Docker commands and standard kubectl commands. We do have one important additional feature, however: both vDVS and VCP can leverage VMware’s Storage Policy Based Management (SPBM) framework. Let’s say that your underlying storage is provided by vSAN. When creating a Docker volume, or when specifying a Kubernetes storage class, you can also specify a particular policy for the container volume that you are instantiating. So you could include additional items like stripe width, failures to tolerate, and all of the other policy settings that you can associate with vSAN storage via SPBM. Very nice indeed.
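For example, here is a hedged sketch of passing vSAN policy settings through both interfaces. The `vsan-policy-name` option on the vDVS side and the `hostFailuresToTolerate`/`stripeWidth` parameters on the VCP side reflect my understanding of the options available at the time; the policy name `gold` and the other names are made up:

```shell
# vDVS: create a Docker volume pinned to an existing SPBM policy
docker volume create --driver=vsphere --name=DBVolume \
  -o size=20gb -o vsan-policy-name=gold

# VCP: bake vSAN policy settings directly into a storage class
kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsan-resilient
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  hostFailuresToTolerate: "1"
  stripeWidth: "2"
EOF
```

Either way, the resulting VMDK is placed and protected on vSAN according to the policy, exactly as it would be for a regular VM disk.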

Here are some additional links where you can find more information.