
My DockerCon 2017 Day #2

This is day #2 of DockerCon 2017. If you want to read my impressions of DockerCon 2017 Day #1, you can find it here. Today, as well as attending the keynote, some breakout sessions and visiting the expo, I wanted to highlight a couple of VMware announcements that were made in this space yesterday. First of all, we announced the release of vSphere Integrated Containers v1.1. The big-ticket item in VIC 1.1 is that the key components of VIC are now merged into a single OVA appliance for ease of deployment. We also released Photon Platform v1.2. The big-ticket item here, in my opinion, is support for Kubernetes v1.6. There are a bunch of additional improvements beyond those outlined here, so check out the links for more details. OK, back to DockerCon 2017 day #2.

Keynote

Let’s start with the keynote. I think I can sum it up by simply stating that this keynote was all about telling the audience that Docker is now ready for the enterprise. Ben Golub, Docker CEO, led the keynote and invited both Visa and MetLife up on stage to tell us about their Docker journeys and use cases. Pretty run-of-the-mill stuff, and pretty high level to be honest.

The rest of the keynote was all about emphasizing Docker Enterprise Edition. We got demos on using Docker in a secure supply chain, where container images are inspected before being pushed to production. Docker scans the image layer by layer, and only if it passes inspection is the image allowed to go live in production. One of the highlights of the demo was a hybrid Windows/Linux application running on a hybrid Windows/Linux cluster (there is a rough sketch of how that sort of placement works at the end of this section). Judging by the crowd reaction, I’m guessing that this is something of a big deal in the container space.

Ben went on to highlight that they now have a large collection of third-party software certified for use with Docker Enterprise. This led into the next part of the presentation, where Oracle were invited on stage to announce the availability of Oracle software in the Docker Store. The Oracle presenter announced that this is free for test & dev, but that customers would need to call Oracle if they want to go into production or want support. No detail on licensing was provided (no surprise there). There was a lot of noise on Twitter about this, especially considering Oracle’s hard stance on running their software in VMs.

The final demo of the session was migrating a “legacy” application to Docker. This “legacy” application was two VMs: one running the online store (web server and LAMP stack, I believe), and another with an Oracle database for the back-end. They used image2docker, an open source tool, to convert the front-end VM to a container (and create a Dockerfile). For the Oracle part, they just pulled a new Oracle container from the store. That was the demo. No detail on how to get the data from the “legacy” database into the container, and nothing about how to persist the data, which could have been interesting. Oh! And nothing about how to license the Oracle instance either.
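One aside before moving on. I didn’t capture the exact commands used on stage for that hybrid Windows/Linux demo, but with swarm-mode placement constraints the general idea looks something like the sketch below. The service names and images here are my own, purely for illustration:

    # Assumes a swarm cluster with both Linux and Windows worker nodes joined.
    # Pin the Linux half of the application to Linux nodes...
    docker service create --name web \
      --constraint 'node.platform.os == linux' \
      nginx:alpine

    # ...and pin the Windows half to Windows nodes.
    docker service create --name legacy-api \
      --constraint 'node.platform.os == windows' \
      microsoft/iis

The swarm scheduler then takes care of landing each service on a node with the right OS, which is what makes a single hybrid cluster workable.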

[Update]: The keynote finished with an announcement around the Moby Project. I didn’t quite get what this was about initially, but after reading up on it after the keynote, and speaking to people more knowledgeable on the topic than I am, it seems that Docker are now separating Docker the Company/Product/Brand from Docker the Project. So from here on out, the Company/Product/Brand will continue to be known as Docker, and the Docker Project will henceforth be known as the Moby Project. The Moby Project is for people who want to customize their own builds, build their own container system, or deep-dive into Docker internals. Docker the product is recommended for people who want to put Docker into production and get support. I’m guessing this is Docker (the company) figuring out how to start making revenue.

Splunk

My first break-out was to go and see how Splunk works with containers, and they had a customer from Germany (bild.de) co-present with them. One thing of note is that Splunk can be called directly from the docker command, e.g. docker run --log-driver=splunk (there is a fuller example below). Bild, the customer, gets about 20 million users per month, and they run absolutely everything in Docker. All logs are ingested into one cluster, and apart from hiding some sensitive data, 95% of the log data is visible to everyone at Bild, so dashboards can be shared/customized among multiple teams/users. One nice thing is that Bild are able to compare performance on a daily basis and see if anything has degraded. Of course, they can also do generic stuff like alerting on certain log patterns, and create smart queries, e.g. don’t alert until you see X number of these errors over Y time-frame. Splunk licensing for Docker is priced on the GB/TB of data ingested per day.
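For the curious, here is roughly what a full invocation looks like. The splunk-url and splunk-token values below are placeholders, so substitute the details of your own Splunk HTTP Event Collector (HEC) endpoint:

    # Route this container's stdout/stderr to Splunk's HTTP Event Collector.
    # The splunk-url and splunk-token values are placeholders - use your own.
    docker run --log-driver=splunk \
      --log-opt splunk-url=https://splunk.example.com:8088 \
      --log-opt splunk-token=<your-HEC-token> \
      --log-opt splunk-sourcetype=docker \
      --log-opt tag="{{.Name}}/{{.ID}}" \
      nginx:alpine

The tag option is handy here, as it stamps each event with the container name and ID, which makes the kind of per-team dashboard filtering Bild described a lot easier.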

Docker Volume Drivers/Service/Plugins

On a walk around the expo at DockerCon, I noticed a whole range of storage companies (as well as some HCI companies) in attendance: StorageIO, Nimble, Nutanix, Hedvig, NetApp and Dell/EMC, to name a few. It seems that they all now have their own docker volume plugin, which allows them to create docker volumes and enables docker containers to consume those volumes on their own storage array. I’m not really sure what differentiates them, to be honest. I guess they can each expose some semantics that are special to their particular array. Of course, VMware has its own docker volume plugin too, so if you run containers in a VM on top of ESXi, you can use our plugin to create a docker volume on VMFS, NFS or vSAN storage. Those volumes can then be consumed by containers running in the VMs (a quick example follows below). I just did a quick check on the Docker Store, and there are something like 15 plugins for volumes currently available.
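To give a flavour of the workflow, here is roughly how it looks with our vSphere plugin installed in the container host VM. The volume name and size are my own, and the driver name/options may differ between plugin versions, so treat this as a sketch:

    # Create a docker volume backed by vSphere storage (VMFS, NFS or vSAN).
    # Assumes the vSphere Docker Volume Service plugin is installed in the VM.
    docker volume create --driver=vsphere --name=MyVolume -o size=10gb

    # Consume the volume from a container. The data lives on the datastore,
    # so it persists independently of the container's lifecycle.
    docker run --rm -it -v MyVolume:/data busybox sh

The nice part is that the volume lifecycle is driven entirely from the docker CLI, while the actual VMDK management happens down on the datastore.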

Veritas HyperScale for Containers

This was another short break-out session that I attended. This seemed to be another volume play, or at least a storage play, for docker containers, but Veritas is the first company that I’ve seen start making moves towards addressing the problem of backing up data in containers. There are multiple issues here. The first is data locality: in other words, is the container residing with its storage, or is the storage on some other host/node? Do I need to transfer that data before I can back it up? Should I be implementing some sort of backup proxy solution instead? The other challenge is how to consistently back up micro-services, where there could be a web of inter-dependencies involved in capturing the state. This is exponentially more challenging than traditional monolithic/VM backup. The Veritas HyperScale for Containers folks stated that they always host a container’s compute and volumes on the same node for data locality and backup, but there are some considerations on how to do that, especially around failures and restarts. Veritas are just getting started on this journey, but here is a link where you can learn more about what they are doing. It’ll be interesting to follow their progress. BTW, this was only a beta announcement at DockerCon, so don’t expect to be able to pick it up straight away (unless you join the beta).

And that was the end of my conference. I must say, it wasn’t what I expected; there was a lot more infrastructure and operational focus than I had anticipated. It certainly had a good buzz. And next year, DockerCon is going to be held in the Moscone Center in downtown San Francisco, so I guess they are expecting to grow even more over the coming 12 months. I’m sure they will.
