My DockerCon 2017 Day #1

This is my very first DockerCon. It is also the first time that I’ve attended a conference purely as an attendee, without any responsibilities around breakout sessions or customer meetings. Obviously I have an interest in much of the infrastructure side of things, so that is where I focused. This post is just some random musings about my first day at DockerCon17, and some things that I found interesting. I hope you do too.

 

Keynote

First up was Ben Golub, CEO of Docker. This was a sort of “state of the nation” address, where we got some updates on where things were with Docker on its 4th birthday. Currently there are 3,300 contributors to the Docker open source project (of which 42% are independent). There are also now 900K Docker apps on Docker Hub, and finally we were told that Docker currently has 320 employees, of which 150 are engineers.

Next up was Solomon Hykes, Founder of Docker. Solomon was there to make some new announcements. First was multi-stage builds, which enable you to reduce the size of your containers by separating the build environment from the run-time image. Another feature was the ability to move an app from your local desktop to the cloud (in fact, I think it was called “desktop to cloud“) with just a few clicks. But I guess one of the major announcements was LinuxKit, which is a Linux subsystem that can run on any OS and is aimed purely at running containers. To prove the point, one of the weekend project demonstrations showed how Kubernetes could be deployed on a Mac using LinuxKit.
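
To make the multi-stage build idea concrete, here is a minimal, illustrative Dockerfile sketch (the base images, paths and app are just placeholders): the first stage compiles a small Go program, and the second stage copies only the resulting binary into a slim run-time image, so none of the build tooling ends up in the final container.

    # Stage 1: the build environment (hypothetical Go app)
    FROM golang:1.8 AS build
    WORKDIR /src
    COPY . .
    RUN go build -o /app .

    # Stage 2: the run-time image, containing only the compiled binary
    FROM alpine:3.5
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]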

There were some other announcements as well of course (I’m condensing Solomon’s 90 minute keynote into a few short sentences here). However the other one that stood out was when John Gossman of Microsoft took to the stage and demonstrated how you can now run Linux containers on Windows with Hyper-V isolation. Previously you could only run Windows containers on Windows, but with this new Hyper-V isolation for Linux, you can now run Linux containers as well.

 

What’s new in Docker

This session was presented by Victor Vieux. Victor started with the new versioning convention. Following Docker 1.13, the versioning changed to a new format, e.g. 17.03-ce, where 17.03 is a YY.MM date format and “ce” means Community Edition. If you see “ee”, this means Enterprise Edition. There are in fact 3 editions now:

  • edge is the bleeding edge
  • Docker CE is a quarterly release, supported for 4 months
  • Docker EE is a quarterly release, supported for 12 months

 

Victor then delved into the new multi-stage build method announced in the keynote. One thing that was interesting was the set of new commands for managing capacity/space. There is a new “docker system df” command to see how much disk space is being used, and if you need to clean up disk space, there is a new “docker system prune” command which will clean up images and volumes that are no longer being used.
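
For reference, the disk-usage commands look something like this (I’ve omitted the output, and the exact columns vary between Docker versions):

    # Summarise disk usage by images, containers and local volumes
    docker system df

    # The same report with per-object detail
    docker system df -v

    # Reclaim space by removing unused data (asks for confirmation first)
    docker system prune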

Another nice feature that I saw was topology or rack awareness for container applications. When you start your docker engine/daemon, you can associate a label with it. Then when you launch your service, you use a placement preference to specify rack awareness, e.g. “docker service create --placement-pref spread=engine.labels.rack …“. This will ensure that containers are spread across different locations to avoid a single failure taking down the whole application.
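
Pieced together, the workflow looks roughly like this; the label name and service details are purely illustrative, so check the exact flag syntax against your Docker version:

    # On each host, start the daemon with a label describing its rack
    dockerd --label rack=rack1

    # Create the service with a placement preference that spreads
    # tasks evenly across the different values of that label
    docker service create \
      --name web \
      --replicas 6 \
      --placement-pref spread=engine.labels.rack \
      nginx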

The final piece that I thought was interesting was an update to the logging mechanism. You can now get logs at the service level using “docker service logs”, which displays which node each log line is coming from while combining the logs from all of the containers in that service into a single output.
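
Assuming the service above was called “web”, something like the following streams the aggregated logs (flag availability may depend on your Docker version):

    # Follow the combined logs of every task in the service;
    # each line is prefixed with the task and node it came from
    docker service logs --follow web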

 

Portworx

I had a brief chat with these guys at their booth, and I also found out that they presented at the Tech Field Day event that was being run here at DockerCon at the same time. Portworx is a storage start-up (founded in 2015) based out of Los Altos, CA, and they are focused on providing a solution for stateful containers. I went along to their (very short) 20 minute breakout session, where Goutham (Gou) Rao, Co-Founder and CTO of Portworx, gave us a very brief overview of the technology.

In a nutshell, Portworx acts like a virtualized storage layer for containers. The hosts/clusters that are running your container applications are scanned at startup, and Portworx has the smarts to figure out things like zones and regions, as well as the characteristics of the available storage. It “fingerprints” the servers and places the storage into high/medium/low buckets based on those characteristics. Containers then consume storage based on a “storage class”, and this is used for placement decisions on where to create the volume (e.g. local disk, SAN LUN, cloud storage). It also ensures that, for availability, no two copies of the data are placed in the same location (if I remember correctly, 3 copies are created). And once more, if I understood correctly, the container is moved to where it will have data locality to the storage for best performance.

Portworx also allows different policies for different applications (e.g. Cassandra can have one policy, PostgreSQL can have another). Resilience is achieved by maintaining multiple copies of the data, and performance is improved by acknowledging a write once a quorum of devices has committed it, while the remaining devices commit the block afterwards.
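
As an illustration of how this surfaces to the user, creating a volume through Portworx’s Docker volume plugin looks roughly like the sketch below. I haven’t verified the option names against the Portworx documentation, so treat the driver name and the options (replication factor, IO priority) as assumptions based on my notes rather than gospel.

    # Create a volume via the Portworx volume driver (option names assumed):
    # repl=3 keeps three copies of the data, and io_priority maps to the
    # high/medium/low buckets mentioned above
    docker volume create --driver pxd \
      --opt size=10G --opt repl=3 --opt io_priority=high \
      --name pgdata

    # Run a database container against that volume
    docker run -d --name postgres -v pgdata:/var/lib/postgresql/data postgres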

I’ll admit that I have a lot of questions still to ask about this solution. I guess the TFD recording is the best place to start. You can also learn more here.

 

Infinit

This is a company that Docker recently acquired. I went along to this session thinking that there might be some examples of where one might use Infinit for container volumes, but this session was purely aimed at discussing how Infinit could be used as a key-value store. The session was still interesting, and Julien Quintard (former CEO of Infinit) and Quentin Hocquet gave good overviews of how Infinit could be used as a superior KV store. They highlighted a bunch of benefits of using the Infinit KV store over those found in etcd, ZooKeeper, Consul and even Raft for maintaining consensus.

The issue, as they see it, is that most of these KV stores work off of a master/worker model, whereas Infinit works off of each node being equal, doing some master operations while still storing blocks of data. They also have this concept of block quorums, where multiple nodes are trying to write to the same block, and only the nodes that are in that group need to reach consensus about who succeeds and who has to retry. This is unlike other models, where all managers need to reach consensus. They claim that this approach gives better security, better scalability and better performance.

They also have the concept of mutable and immutable blocks. A mutable block is obviously one that is in a state of change and can be written to (so it requires consensus when there are multiple nodes writing), but immutable blocks don’t need to worry about that; there is only ever one version and it is never updated. You can also keep this latter block type in cache forever, since it never changes. The guys then went on to show us a demo of this in action. They said that the plan is to open source this in the next one or two months. My understanding is that this is not a direct replacement for the likes of etcd, ZooKeeper, Consul or Raft. I believe it works at a lower level, but possibly someone could take this open source project and build another service on it to maintain cluster integrity and synchronization.

Unfortunately there was no discussion regarding using the Infinit storage platform for container volumes, etc., although in fairness, the guys discussed what was in the session title. You can learn more here.

That’s it for my first day at DockerCon17. Tomorrow I’m hoping to spend some more time in the exhibition center, and see what else is happening in the container ecosystem.
