
My highlights from KubeCon and CloudNativeCon, Day #1, Europe 2019

This year, I have once again been fortunate enough to attend KubeCon and CloudNativeCon Europe, which is being held in Barcelona. This is my second such conference, having attended last year's European event in Copenhagen. I was very interested in seeing how things have progressed, especially in the cloud native storage space.

The morning started with the usual set of keynotes. Dan Kohn and Cheryl Hung filled us in on what is happening in the Cloud Native Computing Foundation (CNCF) space, sharing details about the increase in membership and contributors since the last conference. Of note, there are 7,700 attendees at this year's conference in Barcelona. I believe last year's attendee count in Copenhagen was just over 4,000, based on some searches that I did. That's a pretty significant increase in attendee numbers once again.

Then it was on to VMware's very own Bryan Liles to fill us in on what is happening with the various CNCF projects. The projects were broken out into Sandbox, Incubator and Graduated categories.

In the Sandbox stage, CNCF has just added OpenEBS. OpenEBS is a Container Attached Storage (CAS) solution, which I haven't looked at in too much detail yet. From my limited understanding, it has a control plane for provisioning volumes and implementing various volume actions, such as taking snapshots and making clones. From the data path perspective, an application running in a container accesses persistent volumes via an iSCSI target container, which in turn accesses one or more storage containers. OpenEBS also offers a choice of different storage engine implementations.

OpenEBS have also just announced a plugin for Velero, which uses snapshots to facilitate backup and restore of applications using OpenEBS storage. On that note, Velero 1.0 just GA'ed this week. Congrats to the whole Velero team on that achievement.

Bryan went on to tell us about a number of CNCF projects that are in the Incubator stage. There are quite a few. These include Linkerd (secure service mesh), Helm v3.0 (application packaging), Harbor v1.8 (container registry), Rook 1.0 (storage orchestration), CRI-O (container runtime), and finally the merging of OpenCensus and OpenTracing into a new project called OpenTelemetry. Interestingly enough, CRI-O is Red Hat's lightweight container runtime, which will now become the only container runtime supported in OpenShift (if I understood the presenter correctly).

Now, as Bryan said, there are far too many features in each of these projects to talk about in the keynote. The same is true here in this post. Therefore I've tried to add links to additional blogs and sites which discuss these new features.

The last project mentioned by Bryan was a 'Graduated' project: fluentd. We had Eduardo Silva from ARM deliver this presentation. This was great to see, as I also attended Eduardo's very good presentation on fluentd last year at KubeCon. He told the audience that fluentd now has over 1,000 plugins and is the de facto standard for all cloud providers.

To recap, Bryan highlighted that there are now 16 Sandbox, 16 Incubator and 6 Graduated CNCF projects – 38 in total.

Before lunch, I went along to the Kubernetes Stateful Storage Workshop, hosted by David Zhu and Jan Safranek. They took us through the deployment of static and dynamic persistent volumes, touching on persistent volumes, persistent volume claims, and storage classes. We also looked at the difference between deployments (which scale Pods only) and stateful sets (which scale Pods and volumes together). Lastly, we touched on the different sorts of services one could have for a cloud native application. There is the standard ClusterIP service, which provides an internal IP and DNS name for the Pods but no external access; there is the LoadBalancer service, which provides an internal IP and DNS name along with an external IP for access to the application; and finally there is the headless service, which has no cluster IP of its own and instead has DNS resolve directly to the individual Pod IPs. We then provisioned a counter as a service, along with a Cassandra cluster, and looked at all of these concepts in practice. Pretty nifty 101 session.
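To make the dynamic provisioning piece a little more concrete, here is a minimal sketch using the official Python Kubernetes client: a persistent volume claim that references a storage class, leaving the cluster to provision the backing volume on demand. The storage class name and namespace below are placeholders of my own, not anything taken from the workshop.

```python
# Minimal sketch of dynamic provisioning: a PVC referencing a StorageClass,
# so the cluster provisions the backing volume on demand.
# The StorageClass name ("standard-sc") and the namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="standard-sc",  # hypothetical StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```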

After lunch, I attended the VMware SIG hosted by Steve Wong and David vonThenen. We were introduced to the new out-of-tree CSI driver from VMware. I'll do a much more detailed write-up about this driver as we get closer to the GA date, but in the session we were given some details about this new driver for persistent volumes on vSphere storage. If you weren't aware, Kubernetes has requested that all of the in-tree persistent volume drivers from all vendors be moved out-of-tree, into a new plugin format which adheres to the Container Storage Interface (CSI) specification. VMware is no different, and eventually you will see the vSphere Cloud Provider (VCP), also known as Project Hatchway, deprecated in favour of this new CSI driver. There are a number of reasons for this, ranging from the bloating of Kubernetes itself to security concerns (as in-tree code has to run privileged in Kubernetes).

Steve went on to highlight a number of differences between the VCP and CSI, namely that the CSI driver uses First Class Disks, that it will support multi-vCenter and multi-Datacenter configurations, that it will support conventional volume mounts as well as raw mounts, and that it includes zone support. After handing over to David, the audience was given a live demo of the new CSI driver. We were also introduced to the Cloud Controller Manager (CCM). Using both the CSI and CCM integrations, David demonstrated how he could instantiate an application and persistent volume on a particular vCenter/Datacenter in his vSphere infrastructure. Pretty cool.
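For illustration only, the sketch below shows how a storage class pointing at a CSI provisioner might be created with the Python Kubernetes client. The provisioner string and the storage-policy parameter are my assumptions about how the vSphere CSI driver could be wired up, not details taken from the session, so treat the names as placeholders.

```python
# Hedged sketch: a StorageClass that delegates provisioning to a CSI driver.
# The provisioner name and the storage-policy parameter are assumptions,
# not confirmed details of the vSphere CSI driver shown in the demo.
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()

sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="vsphere-csi-sc"),  # placeholder name
    provisioner="csi.vsphere.vmware.com",                 # assumed driver name
    parameters={"storagepolicyname": "gold"},              # hypothetical SPBM policy
)

storage.create_storage_class(body=sc)
```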

Steve then went on to share a number of roadmap items that we have planned, many of which are also on the roadmap of the CSI specification itself. These include volume resize, volume clone, snapshots, read-write-many volumes, and plans for how to migrate from the older VCP/Hatchway driver to the new CSI driver. As mentioned, I hope to write about this in much more detail when we get closer to GA. So be aware: if you are using an in-tree driver from any vendor, eventually you will have to consider moving to the new CSI format.

I closed my day with a catch-up on what is going on with Rook. I wrote about these guys previously when I attended their session at last year's KubeCon. In a nutshell, they do container storage orchestration. As was mentioned in the keynote, Rook 1.0 is now released and Rook has become an incubator project in CNCF. In this session, we were again told about the various operators that Rook has developed to make a complicated storage platform such as Ceph very easy to deploy in K8s. Ceph can be very useful if you have some spare storage lying around but are not in a position to purchase a storage system. However, getting to grips with the different components of Ceph, and how to configure them, is challenging. By using an orchestrator such as Rook, you can very easily roll out (in only 3 or 4 commands) a containerized Ceph deployment on Kubernetes, as demonstrated to us by Travis Nielson of Red Hat. We were shown how to configure entries in some YAML files and roll them out. First, there is a common YAML file which grants the Ceph Pods the privileges they need to access the underlying storage devices. Then we rolled out the operator itself. And finally, once the operator was in place, we saw how to deploy the Ceph cluster, and watched the various Mons (monitors) and OSDs (object storage daemons) get rolled out.
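To give a feel for how few steps that rollout involves, here is a rough sketch of the three-stage Rook/Ceph deployment driven from Python rather than kubectl. The manifest filenames are assumptions based on the layout of Rook's Ceph examples, not the exact files used in the demo.

```python
# Rough sketch of the three-step Rook/Ceph rollout described above.
# The manifest filenames are assumptions, not the exact files from the demo.
from kubernetes import client, config, utils

config.load_kube_config()
k8s = client.ApiClient()

# 1. Common resources: namespace, RBAC, and the privileges the Ceph Pods need
utils.create_from_yaml(k8s, "common.yaml")

# 2. The Rook operator itself
utils.create_from_yaml(k8s, "operator.yaml")

# 3. The Ceph cluster definition; the operator then spins up the Mons and OSDs
utils.create_from_yaml(k8s, "cluster.yaml")
```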

What was more interesting to me was the ecosystem that Rook are building. Ceph is now considered stable for use with Rook, EdgeFS is considered beta, and Minio S3 object stores, NFS, Cassandra and CockroachDB are all in alpha.

All in all, a very good day indeed. I'm looking forward to the next couple of days, as there are lots more storage-related sessions on the agenda.
