Container Volumes in VIC v0.4.0

I mentioned yesterday that VMware made vSphere Integrated Containers (VIC) v0.4.0 available. Included in this version is support for container volumes. VIC is still a work in progress, and not everything has been implemented yet. In this post I want to step you through some of the enhancements we have made around docker volume support in VIC, and hopefully provide you with enough information to try it out for yourself.
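To give a flavour of what this enables, here is a minimal sketch of working with volumes through the VIC docker endpoint. The endpoint address is a placeholder and the Capacity option key is an assumption on my part, so check the full post for the exact v0.4.0 syntax:

    # Create and list volumes through the VIC docker endpoint (address and option key are assumptions)
    docker -H 192.168.1.100:2375 volume create --name=demo-vol --opt Capacity=2GB
    docker -H 192.168.1.100:2375 volume ls
    docker -H 192.168.1.100:2375 volume inspect demo-vol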

Continue reading

Getting Started with vSphere Integrated Containers v0.4.0

I’ve been working very closely with our vSphere Integrated Containers (VIC) team here at VMware recently, and am delighted to say that v0.4.0 is now available for download from GitHub. Of course, this is not yet supported in production and remains in tech preview. However, for those of you who are interested, it gives you an opportunity to try it out and see the significant progress made by the team over the last couple of months. You can download the build from bintray. This version of VIC brings us closer and closer to the original functionality of “Project Bonneville” for running containers as VMs (not in VMs) on vSphere. The docker API endpoint now provides almost identical functionality to running docker anywhere else, although there is still a little bit of work to do. Let’s take a closer look.
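As a rough illustration of that docker API endpoint, a standard docker client can simply be pointed at the endpoint that the VIC deployment provides using the -H flag. The address and port below are placeholders for whatever your own deployment reports:

    # Point a regular docker client at the VIC endpoint (address and port are placeholders)
    docker -H 192.168.1.100:2375 info
    # Containers then run as VMs on vSphere, but the client workflow looks the same as anywhere else
    docker -H 192.168.1.100:2375 run -d busybox sleep 3600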

Continue reading

Docker Volume Driver for vSphere – short video

I have put together a short, 5-minute video to highlight the different parts of the docker volume driver for vSphere “tech preview”, to accompany the blog post I published earlier.

It will show what is needed on the ESXi host, and on the VM running containers.

Hope you find it useful.

Docker Volume Driver for vSphere

This is a really cool development. There is now a docker volume driver for vSphere, which we made public just last night, and it is now available as a tech preview. This will allow customers to address persistent storage requirements for Docker containers in vSphere environments. Basically, it allows you to create a VMDK and use that VMDK as a persistent storage volume for containers. Below, I will outline the steps involved in getting started with the Docker Volume Driver for vSphere. In essence, there are 4 steps:

  1. Install the docker volume plugin on the ESXi host (I was running ESXi 6.0 U2).
  2. Deploy a Photon OS VM (although you can also use Ubuntu).
  3. Install the docker VMDK plugin in the VM.
  4. Create a docker volume and run a container to consume it (a rough sketch follows below).

All the pieces to get you started are available on github here.
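As a rough sketch of what step 4 looks like from inside the VM, assuming the plugin registers its volume driver under the name “vmdk” (the driver name and size option may differ in your build, so treat these commands as illustrative):

    # Create a VMDK-backed volume via the vSphere volume plugin (driver name and option are assumptions)
    docker volume create --driver=vmdk --name=MyVolume -o size=10gb
    # Run a container with the volume mounted; data written to /mnt/myvol persists in the VMDK
    docker run --rm -it -v MyVolume:/mnt/myvol busybox sh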

Continue reading

Getting started with Photon OS and vSphere Integrated Containers

There has been a lot of news recently about the availability of vSphere Integrated Containers (VIC) v0.1 on GitHub. VMware has been doing a lot of work around containers, container management and the whole area of cloud native applications over the last while. While many of these projects cannot be discussed publicly, there are two projects that I am going to look at here:

  • Photon OS – a minimal Linux container host designed to boot extremely quickly on VMware platforms.
  • vSphere Integrated Containers – a way to deploy containers on vSphere. This allows developers to create applications using containers, but have the vSphere administrator manage the required resources needed for these containers.

As I said, this is by no means the limit of the work that is going on. Possibly the best write-up I have seen discussing the various work in progress is this one here on the Next Platform site.

I will admit that I’m not that well versed in containers or docker, but I will say that I found Nigel Poulton’s Docker Deep Dive on PluralSight very informative. If you need a primer on containers, I would highly recommend watching this.

So what am I going to do in this post? I will walk through the deployment of Photon OS, and then deploy VIC afterwards. You can then see for yourself how containers can be deployed on vSphere, and perhaps managed by a vSphere administrator, while the developer just focuses on creating the app and doesn’t have to worry about the underlying infrastructure.

Continue reading

Some upcoming speaking engagements

A short post to let you know about some upcoming speaking engagements that I am doing over the next couple of weeks.

First up, I will be speaking at the TechUG (Technology User Group) event next week, on Thursday, November 26th, at the Westin Hotel in the heart of Dublin city, Ireland. There is a really good agenda for this event (which is not a VMware-centric event), which you can find at this link here. I personally will be speaking about Virtual SAN (VSAN), VMware’s hyper-converged compute and storage platform. This will be more of an introductory session, but I’ll also give an overview of new and upcoming features and where we are thinking of going next with VSAN. You can find the Dublin TechUG registration link here.

My next session is at the VMUGDK UserCon, or Nordics UserCon, which will be held on Tuesday, December 1st at the Scandic Hotel in Copenhagen, Denmark. This year I will return to my roots and talk about core vSphere storage enhancements over the past few releases, as well as some upcoming plans. No VSAN, VVols or anything like that; this will be a discussion on VMFS, NFS, VAAI, PSA, etc. The Nordics UserCon details can be found at this link here, and the registration link is at the same location.

If you are in the Dublin or Copenhagen area for either of these events, I’d love to see you there. I plan to spend most of the day at both events, so if you have any VSAN or vSphere storage questions or feedback, I’d be delighted to talk with you in person.

Error while adding NFS mount: NFS connection limit reached!

The advanced setting SunRPC.MaxConnPerIP defines the maximum number of unique TCP connections that can be opened for a given IP address. This is of particular interest to users of NFS. If the number of mounts to an IP address is more than SunRPC.MaxConnPerIP, then the existing connections for NFS mounts are shared with new mounts from the same IP address. Currently VMware supports a maximum of 128 unique TCP connections per ESXi host but also supports up to 256 mounts per host. So what options are available to configure ESXi hosts to allow the maximum number of NFS mounts?
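For reference, advanced settings like this can be inspected and changed with esxcli. The values below simply mirror the maximums quoted above, and the companion NFS.MaxVolumes option (not named above) is the setting that governs the number of NFS mounts per host; validate both against the NFS guidance for your ESXi release before changing them:

    # Check the current per-IP connection limit for NFS traffic
    esxcli system settings advanced list -o /SunRPC/MaxConnPerIP
    # Raise the connection limit and the mount limit to the maximums mentioned above
    esxcli system settings advanced set -o /SunRPC/MaxConnPerIP -i 128
    esxcli system settings advanced set -o /NFS/MaxVolumes -i 256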

Continue reading