My first look at Unikernels on vSphere
Dear reader, if you are like me, you may only be getting to grips with containers and how they compare to the virtual machine approach of running applications. While there has been a lot of buzz around containers, I’d heard some rumblings around Unikernels but, to be honest, hadn’t really been paying much attention to them. That was until my recent visit to Norway, where I was speaking at the Oslo VMUG. One of the sessions delivered at that VMUG was by Per Buer, CEO of a company called IncludeOS. IncludeOS are one of a handful of companies focused on developing Unikernels. In fact, they are only a few short months away from putting their first Unikernel product into production. And guess what? This Unikernel is deployed as a VM on ESXi! I had a follow-up conversation with Per to talk about this approach to doing Unikernels, and I think you’ll find what he has to say very interesting.
A bit of back history
Per has been a software designer/architect for some time, having previously founded the company behind Varnish Cache, used for web application acceleration. He tells a very engaging story about issues his teams have encountered doing software development in the past, such as kernel compilation, maintaining builds with the correct packages, and making sure that their different software products could run on multiple different platforms. Basically a PITA. And then along came containers, which made packaging and portability so much easier. But containers had their own set of issues. Each containerized application has an implicit dependency on the base operating system, so if something needed to be changed in the base OS for one containerized app, it could break other containerized apps on the same host. According to Per, this is a particular issue for maintaining optimal performance for containerized apps. Another big issue is that isolation is very weak in containers, and Per mentioned reports of container “break out”. In many cases, another layer of “isolation” needs to be put in place when using containers. The security aspect of containers is possibly the biggest concern.

And this is where Unikernels come in. First of all, there is no General Purpose Operating System, such as Linux, in a Unikernel. A Unikernel is a compiled application with the minimal set of binaries, libraries and drivers needed to allow it to run. Since Unikernels can be created without any user interfaces, they are very difficult to attack. Another goal of Unikernels is to have a universal binary that will deploy on any platform. And without getting too deep into the weeds, another advantage of Unikernels is that there are no system calls between the application (which traditionally would have run in user space) and the kernel (which runs in privileged mode). The application is now the kernel. And since there is no OS to boot, these things fire up very quickly indeed.
IncludeOS Unikernels on vSphere
So how did IncludeOS come to develop a Unikernel that runs on vSphere? This is another interesting story (Per is a great story-teller). One of our Norwegian Service Provider partners (Basefarm) attended a VMware event where Unikernels were discussed as a technology trend. They started to do some research of their own after the event and discovered that IncludeOS were literally next door to their offices in Oslo. Basefarm and IncludeOS decided on a use case that they could work on together: replacing Basefarm’s current hardware-based network load-balancers with Unikernels from IncludeOS. There were many reasons for this – cost, complexity and performance. With a grant from the Norwegian government, the project began. Although the work initially started on a different infrastructure platform, they eventually moved to developing the Unikernel for vSphere. The only VMware-specific work was to create a VMXNET3 driver for the Unikernel, which Per said didn’t take very long at all. The plan is to run these in Basefarm’s cloud, and my understanding is that this should go live in H2 2017. This load-balancer Unikernel will have 3 network interfaces – one for the admin network, one for the public network and a third for the private network. To use it, you simply create a VM, but instead of installing a Guest OS + patches + application, there is now just the Unikernel to deploy. IncludeOS do all their work in C++ (this makes it easier to do low-level, OS-type work when compared to a combination of C and assembly code, according to Per). So the load-balancers in Unikernels will be light-weight, secure, and very fast.
What are the other use cases?
There is no disk driver in the IncludeOS Unikernel yet – but that is not to say it cannot be done. Right now, the focus is on networking and statelessness, and Per told me that they are already looking at doing a Unikernel router and a Unikernel firewall. With Unikernels, the image is recompiled every time it is changed. For something like a firewall, they would take the set of firewall rules, transpile them into C++, and then compile everything into the Unikernel. The firewall rules are really just a bunch of if statements (e.g. if the source is this IP address, and it is allowed to talk to this destination, then let the traffic be routed). But these statements/rules are now translated into binary code which runs in the Unikernel. Not that they expect you to do any of this by hand – they have an external toolkit that takes care of it. To quote Per, we’re simply taking out the General Purpose Operating System and replacing it with a Unikernel running in a VM. And since it is in a VM, you have all those core vSphere features at your disposal – vSphere HA, DRS, vMotion, etc. – as well as the lower-level stuff such as memory management, CPU scheduling, etc.
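To make the transpilation idea a little more concrete, here is a hypothetical sketch of what a couple of firewall rules might look like once turned into C++ if statements. The `Packet` struct, `ip()` helper and `filter()` function are all invented for illustration – IncludeOS’s actual toolkit will generate different code – but it shows why compiled rules end up as plain branch instructions rather than a rule table interpreted at runtime.

```cpp
#include <cstdint>

// Minimal stand-in for a parsed packet header (invented for this sketch).
struct Packet {
    uint32_t src;    // source IPv4 address as a 32-bit integer
    uint32_t dst;    // destination IPv4 address
    uint16_t dport;  // destination port
};

// Helper: build an IPv4 address constant, e.g. ip(10,0,0,5).
constexpr uint32_t ip(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
    return (uint32_t(a) << 24) | (uint32_t(b) << 16) | (uint32_t(c) << 8) | d;
}

enum class Verdict { Allow, Drop };

// Each rule becomes a plain if statement, compiled straight into the image.
Verdict filter(const Packet& p) {
    // "allow 10.0.0.5 -> 10.0.1.7 on port 443"
    if (p.src == ip(10,0,0,5) && p.dst == ip(10,0,1,7) && p.dport == 443)
        return Verdict::Allow;
    // "allow anything from the admin subnet 192.168.10.0/24"
    if ((p.src & 0xFFFFFF00u) == ip(192,168,10,0))
        return Verdict::Allow;
    // default deny
    return Verdict::Drop;
}
```

The point is that there is nothing left to interpret: change a rule and you recompile the image, which is exactly the recompile-on-every-change workflow described above.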
Pros and Cons
Without trying to state the obvious, having a single image that encompasses the application and the absolute minimum set of OS features required to run on bare metal or a hypervisor has many benefits, which Per outlined in his presentation – small footprint, portability, increased security, better performance. Of course, we have to balance this against the fact that you might often have to re-invent a lot of the OS constructs in Unikernels, depending on the application requirements. And as the requirements of the application become more complex, more of those constructs need to be included in the Unikernel. I guess what I am trying to say is that, just as with containers, there may be some applications such as network services (load-balancers, protocol stacks, etc.) that make perfect sense for Unikernels, and other applications that do not.
This was one of the most engaging presentations that I had seen in a long time, and something that made perfect sense for certain use cases. Per is not a marketing guy – he is an architect – but he completely sold me on Unikernels for those use cases. Like I mentioned, Per tells a great story and is very engaging. You can tell he is passionate about this topic, without coming across as having “drunk the kool-aid”. I’m hoping to have Per come to speak at some more VMUGs around Europe (and maybe further afield). If you’re a VMUG leader and would like to be introduced to Per with a view to having him speak at your VMUG about real-world Unikernels running in VMs on vSphere, please let me know. In the meantime, Per very graciously made a short 15-minute recording of his Unikernel story, which you can watch on YouTube here. In Per’s own words, “this is a low-level production”, but if you want a quick, concise overview of Unikernels, this is a good place to start:
Question for the readers
Have you looked at Unikernels running on vSphere? If so, who have you been speaking to? And what is the use-case? I’m very interested to hear what other VMware customers are doing with Unikernels.
9 Replies to “My first look at Unikernels on vSphere”
I think it makes a lot of sense, but we need a disk driver and long-term storage support. Otherwise this is just an illusion.
Phillip, there is currently IDE and virtio disk support, so we can read local storage. We haven’t yet seen the need to write to disk, so we haven’t implemented it. The whole idea of “immutable infrastructure” goes against persisting to local storage.

But expanding the IDE support to allow writing should be reasonably simple to do.

The question is, why write to local storage? Networked storage and database access have so far proven sufficient for what we want to do.

Writing to disk in a non-trivial application would also mean we’d have to add a more sophisticated filesystem (we’re currently using FAT for things like web assets and configuration files). I’m not certain that would be a good thing. Unikernels aren’t really a replacement for all general purpose operating systems. They are better suited to VMs with a narrower focus.
If my memory serves right, having applications as part of kernel threads is what Real-Time Operating Systems do, to make everything deterministic. WindRiver’s VxWorks RTOS is one that comes to mind. Wondering how a Unikernel is different from an RTOS?!
Unikernels don’t leverage high-resolution timers to facilitate preemption. IncludeOS cannot interrupt the handling of a packet, even if that handling takes milliseconds.

An RTOS would need to guarantee that an interrupt will be handled within X microseconds. Without a way of kicking whatever is currently monopolizing the CPU off it, I think it would be hard to implement an RTOS.

Preempting stuff is expensive. And since IncludeOS is currently focused on running on top of a hypervisor, adding real-time support would be moot, as the hypervisor cannot guarantee much to the VM anyway.
Understood. And I agree with everything you said. I was wondering if you’re trying to build an RTOS on top of a hypervisor?! I am glad you’re not 🙂 As far as I could tell, the programming paradigm completely changes with Unikernels. Publishing “how to” articles with examples will certainly help people understand better and gain mindshare. Just a thought. Thanks!
Is there a benefit to installing vmtools inside a unikernel?
AFAIK – no. It doesn’t make much sense to do so currently. A unikernel image is a single process – there is nothing else running inside the VM.

IncludeOS already has a paravirtualized driver for VMXNET3 – so network performance should be very good. Disk IO support could be improved with paravirtualized drivers, but currently IncludeOS isn’t well suited to disk-intensive workloads.

There might be other advantages to improving support for ESXi, perhaps some monitoring support, but I’m not aware of any. I’d love to hear more about the possibilities, and perhaps even how this could be implemented.
What about vmtools’ ability to gracefully shut down and restart VMs via vCenter – or is that pointless with a Unikernel?
That is mostly pointless. Since most unikernel deployments are for stateless services, there is little cleanup to be done. For the project we’re doing with Basefarm – load balancing and firewalling – there is nothing graceful the unikernel can do to help with shutdown. That would be the same no matter what sort of load balancer you’re running.
The moment we start doing disk IO things change, naturally. At that point we might have buffered IO that we would like to synchronize to disk before we die. But we’re not there yet.