Compare and Contrast: Photon Controller vs VIC (vSphere Integrated Containers)

As many regular readers will be aware, I've been spending a lot of time recently on VMware's Cloud Native App solutions. This is due to an internal program available to VMware employees called a Take-3. A Take-3 is where employees can take 3 months out of their current role and try a new challenge in another part of the company. Once we launched VSAN 6.2 earlier this year, I thought this would be an opportune time to try something different. Thanks to the support from the management teams in both my Storage and Availability BU (SABU) and the Cloud Native Apps BU (CNABU), I started my Take-3 at the beginning of May. This is when my CNA articles on VIC (vSphere Integrated Containers) and Photon Controller first started to appear. Only recently I was asked an interesting question – when would I use VIC and when would I use Photon Controller? That is a good question, as both products enable customers to use containers on VMware products and solutions. So let me see if I can provide some guidance, having asked the same question of some of the guiding lights in the CNABU.

When to use VIC?

Let's talk about VIC first, and why customers might like to deploy container workloads on VIC rather than something like "container in a VM". Just to recap, VIC allows customers to run "container as a VM" on their vSphere infrastructure, rather than "container in a VM". It can be deployed directly to a standalone ESXi host, or it can be deployed to vCenter Server. This has some advantages over the "container in a VM" approach.
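
To make the "container as a VM" model a little more concrete, here is a minimal sketch using the Docker SDK for Python. It assumes a VCH has already been deployed (for example with vic-machine) and is reachable at a placeholder, non-TLS endpoint; from the developer's point of view it behaves like any other Docker endpoint, but every container started through it comes up as its own VM on the vSphere infrastructure.

```python
# Minimal sketch: a developer talking to a VIC Virtual Container Host (VCH)
# with the Docker SDK for Python. The endpoint address is a placeholder; a
# real VCH would normally be deployed with vic-machine and secured with TLS.
import docker

# The VCH exposes a standard Docker API endpoint.
client = docker.DockerClient(base_url="tcp://vch01.example.com:2375")

# Each container launched here is instantiated as a "container VM" on the
# ESXi host or vCenter Server sitting behind the VCH.
container = client.containers.run("nginx:alpine", detach=True, name="web01")

# From the docker client's perspective, nothing has changed.
for c in client.containers.list():
    print(c.name, c.status)
```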

Reason 1 – Efficiency

Consider an example where you have a VM which runs a docker daemon and launches lots of containers, and customers connect to these containers via the docker client. Assume that over a period of time this VM uses up a significant amount (if not all) of its memory for containers, and eventually those containers are shut down. The memory consumed by the VM on behalf of the containers does not go back into the shared pool of memory (on the hypervisor where the VM runs) for other uses. With VIC, since we are deploying containers as VMs and using ESXi/hypervisor memory resource management, we do not have this issue. To think of this another way: containers are potentially short-lived, whereas the "container host" is long-lived, and as such it can end up making very inefficient use of system resources from the perspective of the global pool.

Now there is a big caveat to this, and it is the question of container packing and life-cycle management. If the container host VMs are well packed with containers, and you also have control over the life-cycle of the container host, then it can still be efficient. If, however, container packing on the container host cannot be predicted (with over-provisioning the likely result), and you have no control over the life-cycle of the container host, then you typically do not get very good resource efficiency.

Reason 2 – Multi-tenancy

There is no multi-tenancy in Docker. Therefore, if 50 developers all requested a "container" development environment, a vSphere admin would have to deploy 50 virtual machines, one per developer. With VIC, we have the concept of a VCH (Virtual Container Host), which controls access to a pool of vSphere resources. A VCH is designed to be single-tenant, just like a Docker endpoint; both present you with a per-tenant container namespace. However, with VIC, one can create many VCHs, each with its own pool of resources. These VCHs (resource pools), whether built on a single ESXi host or on vCenter Server, can be assigned to individual developers.

One could now consider that the vSphere admin is doing CaaS – Containers as a Service.

The 50-developers example is as much about efficiency as it is about tenancy – the fact that you can only have one tenant per container host VM forces you down a path of creating a large silo composed of 50 container host VMs. When comparing ESXi with Linux on the same piece of hardware for running container workloads, ESXi has a big advantage in that you can deploy as many VCHs as you like.
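
As a rough sketch of what this looks like from the consumer side, the snippet below (Docker SDK for Python) simply treats each developer's VCH as a separate Docker API endpoint backed by its own pool of vSphere resources. The endpoint addresses are purely illustrative.

```python
# Sketch: one VCH (i.e. one Docker API endpoint) per developer, each backed
# by its own vSphere resource pool. Endpoint addresses are illustrative only.
import docker

vchs = {
    "alice": "tcp://vch-alice.example.com:2375",
    "bob":   "tcp://vch-bob.example.com:2375",
}

for developer, endpoint in vchs.items():
    client = docker.DockerClient(base_url=endpoint)
    # Each VCH presents a per-tenant container namespace, so Alice's
    # containers are invisible to Bob and vice versa.
    names = [c.name for c in client.containers.list()]
    print(f"{developer}: {names}")
```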

Reason 3 – Reducing friction between vSphere/Infra Admin and developer

One of the main goals of VIC was to save the developer from having to worry about networking and security infrastructure for containers. This particular reason is more about how VIC informs and clarifies the boundaries between the vSphere admin and the developer. To put it simply, a container host VM is like a mini-hypervisor. Give that to a developer and they are then on the hook for patching, network virtualization, storage virtualization, packing, etc. within the container host. The container host is then also a "black box" to the infra folks, which can lead to mistakes being made. Take, for example, a rule such as "only certain workloads are allowed on this secure network". The secure network is configured at the VM level. If the VM is a container host, it is hard to control or audit which containers are coming up and down inside that VM and have access to that secure network.

VIC removes any infra concerns from the consumer of a VCH and allows for much more fine-grained control over access to resources. With VIC, each container gets its very own vNIC, and a vSphere admin can monitor resources that are being consumed on a per-container basis.

There is one other major differentiator here with regard to the separation of administrator and developer roles, and it relates to compliance and auditing tools, and the whole list of processes and procedures infra teams have to follow as they run their data center. Without VIC, developers end up handing over large VMs that are essentially black boxes of "stuff happening" to the infra team. This may include the likes of overlay networks between those "black boxes". It is likely that most of the existing tools the infra team uses for compliance, auditing, etc. will not work.

With VIC there is a cleaner line of demarcation. Since all the containers run as VMs, and the vSphere admin already has tools set up to operationalize VMs, they inherit this capability for containers.
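
To illustrate that inheritance, here is a small pyVmomi sketch of the kind an infra team might already run against vCenter Server; because every VIC container is a VM, it shows up in the same inventory and quick-stats data the team already collects for compliance and monitoring. The vCenter address and credentials are placeholders.

```python
# Sketch: VIC container VMs appear in the vCenter inventory like any other
# VM, so existing pyVmomi-based tooling sees them automatically.
# Host and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        qs = vm.summary.quickStats
        # The same per-VM CPU/memory data the admin already collects,
        # now available per container.
        print(vm.summary.config.name,
              qs.overallCpuUsage, "MHz,",
              qs.guestMemoryUsage, "MB")
    view.Destroy()
finally:
    Disconnect(si)
```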

Reason 4 – Clustering

Up until very recently, Docker Swarm was quite primitive when compared to vSphere HA and DRS clustering techniques, as the Docker Swarm placement algorithm was simply round-robin. I'll qualify this by saying that Docker just announced a new Swarm mechanism that uses Raft consensus rather than round-robin at DockerCon '16. However, there is still no consideration given to resource utilization when doing container placement. A VCH, through DRS, has intelligent clustering built in by its very nature. There are also significant considerations in this area when it comes to rolling upgrades, maintenance mode, etc.

Reason 5 – May not be limited to Linux

Since VIC virtualizes at the hardware layer, any x86-compatible operating system is, in theory, eligible for the VIC container treatment, meaning that it is not limited to Linux. This has yet to be confirmed, however, and we will know more closer to GA.

Reason 6 – Manage both VM-based apps and CNA apps in the same infra

This is probably the reason that resonates most with folks who are already managing vSphere environments. What do you do when a developer asks you to manage this new, container-based app? Do you stand up a new silo just to do this? With VIC, you do not need to: you can manage both VMs and containers via the same "single pane of glass".

When to use Photon Controller?

Let's now talk about when you might use Photon Controller. Photon Controller allows you to pool a set of ESXi hosts and use them for the deployment of VMs whose sole purpose is to run containers.

Reason 1 – No vCenter Server

This is probably the primary reason. If your proposed "container" deployment will not include the management of VMs but is focused only on managing containers, then you do not need a vCenter Server. Photon Controller does not need a vCenter Server, only ESXi hosts. And when we position a VMware container solution at "greenfield" sites, we should not have to introduce an additional management framework, such as vCenter, on top of ESXi. The Photon Controller UI provides the necessary views into this container-only environment, albeit one where the containers run in virtual machines.

Reason 2 – ESXi

ESXi is a world-renowned, reliable, best-in-class hypervisor with a proven track record. If you want to deploy containers in production, and wish to run them in virtual machines, isn't ESXi the best choice for such a hypervisor? We hear from many developers that they already use the "free" version of ESXi for developing container applications, as it allows them to run various container machines/VMs of differing flavours, and it also allows them to run different frameworks (Swarm, Kubernetes, Mesos). It would seem to make sense to have a way to manage and consume our flagship hypervisor product for containers at scale.

Reason 3 – Scale

This brings us nicely to the next reason. Photon Controller is not limited by vSphere constructs such as the cluster (which is currently limited to 64 ESXi hosts). There are no such artificial limits with Photon Controller; you can have as many ESXi hosts as you like providing resources for your container workloads. We are talking about hundreds to thousands of ESXi hosts here.

Reason 4 – Multi-tenancy Resource Management

For those of you familiar with vCloud Director, Photon Controller has some similar constructs for handling multi-tenancy. We have the concept of tenants, and within tenants there are resource tickets and projects. This facilitates multi-tenancy for containers and allows resources to be allocated on a per-tenant basis, and then on a per-project basis within each tenant. There is also the concept of flavors, for both compute and disk, through which resource allocation and sizing of containers can be managed.
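
As a rough illustration of that tenant / resource ticket / project hierarchy, here is a sketch driving the Photon Controller REST API with Python's requests library. The endpoint address, resource paths, payload keys and the simplified synchronous response handling are all assumptions made for illustration; the published API documentation (or the photon CLI) is the authoritative reference.

```python
# Rough sketch of the tenant -> resource ticket -> project hierarchy in
# Photon Controller. The endpoint, paths and payload keys are assumptions for
# illustration only, and the real API is asynchronous (it returns tasks);
# consult the Photon Controller API docs or use the photon CLI in practice.
import requests

API = "http://photon-controller.example.com:9000"  # placeholder address

# 1. Create a tenant.
tenant = requests.post(f"{API}/tenants", json={"name": "dev-team"}).json()

# 2. Carve out a resource ticket (quota) for that tenant.
requests.post(
    f"{API}/tenants/{tenant['id']}/resource-tickets",
    json={"name": "dev-ticket",
          "limits": [{"key": "vm.memory", "value": 64, "unit": "GB"},
                     {"key": "vm.cpu", "value": 32, "unit": "COUNT"}]},
)

# 3. Create a project that draws a slice of the ticket for one team/app.
requests.post(
    f"{API}/tenants/{tenant['id']}/projects",
    json={"name": "container-project",
          "resourceTicket": {"name": "dev-ticket",
                             "limits": [{"key": "vm.memory",
                                         "value": 32, "unit": "GB"}]}},
)
```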

Reason 5 – Quick start with cluster/orchestration frameworks

As many of my blog posts on Photon Controller have shown, you can very quickly stand up frameworks such as Kubernetes, Docker Swarm and Mesos using the Photon Controller construct of a "Cluster". This allows you to get started very quickly on container-based projects. On the flip side, if you are more interested in deploying these frameworks using traditional methods such as "docker-machine" or "kube-up", these are also supported. Either way, deploying these frameworks is very straightforward and quick.
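
In the same hedged spirit, here is a sketch of asking Photon Controller to stand up a Kubernetes cluster through its "Cluster" construct, again using Python's requests library. The path and payload fields are assumptions for illustration; in practice the photon CLI (or the Photon Controller UI) is the more usual way to do this.

```python
# Sketch: requesting a Kubernetes cluster from Photon Controller's "Cluster"
# construct. Endpoint, project id, path and payload fields are placeholders /
# assumptions; see the Photon Controller docs or photon CLI for real usage.
import requests

API = "http://photon-controller.example.com:9000"  # placeholder address
PROJECT_ID = "project-id-goes-here"                # placeholder project id

resp = requests.post(
    f"{API}/projects/{PROJECT_ID}/clusters",
    json={"name": "demo-k8s", "type": "KUBERNETES", "slaveCount": 3},
)
print(resp.status_code, resp.text)
```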

Conclusion

I hope this clarifies the difference between the VIC and Photon Controller projects that VMware is undertaking. There are of course other projects ongoing, such as Photon OS. Understanding the difference between VIC and Photon Controller is not entirely intuitive, so hopefully this post helps. One thing I do want to highlight is that Photon Controller is not a replacement for vCenter Server. It does not have all of the features or services that we associate with vCenter Server, e.g. SRM for DR, VDP for backup, etc.

Many thanks to Ben Corrie and Mike Hall of the VMware CNA BU for taking some time out and providing me with some of their thoughts and ideas on the main differentiators between the two products.

6 Replies to “Compare and Contrast: Photon Controller vs VIC (vSphere Integrated Containers)”

  1. Thanks for the very informative comparison. A short question – can Photon Controller pool ESXi hosts licensed with the “free” license?

    1. My understanding is that this should work in beta (which is what we are in right now). Whether that changes at GA, I honestly don’t know.
