Getting started with Photon OS and vSphere Integrated Containers

There has been a lot of news recently about the availability of vSphere Integrated Containers (VIC) v0.1 on GitHub. VMware has been doing a lot of work around containers, container management and the whole area of cloud native applications recently. While many of these projects cannot be discussed publicly, there are two projects that I am going to look at here:

  • Photon OS – a minimal Linux container host designed to boot extremely quickly on VMware platforms.
  • vSphere Integrated Containers – a way to deploy containers on vSphere. This allows developers to create applications using containers, but have the vSphere administrator manage the required resources needed for these containers.

As I said, this is by no means the limit of the work that is going on. Possibly the best write-up I have seen discussing the various work in progress is this one here on the Next Platform site.

I will admit that I’m not that well versed in containers or docker, but I will say that I found Nigel Poulton’s Docker Deep Dive on PluralSight very informative. If you need a primer on containers, I would highly recommend watching this.

So what am I going to do in this post? I will walk through the deployment of Photon OS, and then deploy VIC afterwards. You can then see for yourself how containers can be deployed on vSphere, and perhaps managed by a vSphere administrator, while the developer just worries about creating the app and doesn’t have to worry about the underlying infrastructure.

Part 1: Deploy Photon OS

There are three Photon OS distribution formats available for vSphere: a minimal ISO, a full ISO and an OVA (appliance). You can get them by clicking here. Of course, the OVA is the simplest way to get started, but you might prefer the full ISO method, which is the approach I took. This simply means creating a Linux VM, attaching the ISO to it, and going through the installation. Use the following guidelines:

  • Guest OS Family: “Linux”
  • Guest OS Version: “Other 3.x Linux (64-bit)”.
  • 2 vCPU
  • 2GB Memory (minimum)
  • 20GB Disk (minimum), recommend 40GB for building VIC later
  • Network interface with internet access
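
If you prefer the command line, a VM along these lines could also be created with govc, the vSphere CLI that we install later in this post. This is just a sketch; the datastore, network, ISO path and VM name below are placeholders for your environment:

```shell
# Hypothetical govc sketch matching the guidelines above; datastore,
# network, ISO path and VM name are placeholders, not values from this post.
govc vm.create -c=2 -m=2048 -g=other3xLinux64Guest \
  -disk=40GB -ds=datastore1 -net="VM Network" \
  -iso=iso/photon-full.iso -on=true photon-01
```

You would then step through the installer via the VM console as usual.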

Once deployed, note that SSH is not enabled by default for root, so you will have to enable that too via the /etc/ssh/sshd_config file. Log in as root (default password is changeme), change the root password when prompted to do so, uncomment the “PermitRootLogin” entry, and restart sshd as follows:

root [ ~ ]# systemctl restart sshd
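
For reference, the whole change can be scripted. This is a minimal sketch that assumes the stock sshd_config, where the entry is present but commented out:

```shell
# Back up the config, then enable root logins over SSH
# (assumes the default /etc/ssh/sshd_config layout)
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart sshd
```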

Docker also needs to be started and enabled:

root [ ~ ]# systemctl start docker
root [ ~ ]# systemctl enable docker

And that’s it. You can now start to run docker commands, deploy containers and run some cloud native applications. The example provided in the Photon OS docs is Nginx:

root [ ~ ]# docker run -d -p 80:80 vmwarecna/nginx

You can now point a browser at that container, and verify Nginx is up and running.
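
From the Photon OS host itself, a quick curl is enough to verify this; the -p 80:80 mapping above publishes the container on the host’s port 80:

```shell
# List the running container and check the published port from the host;
# a healthy Nginx should answer with an HTTP status line.
docker ps
curl -sI http://localhost:80/ | head -1
```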

That was pretty painless, right? Now you are ready to deploy VIC using this Photon OS.

Part 2: Deploy vSphere Integrated Containers (VIC) v0.1

Now there are two ways to do this. The first method is to pull down a pre-compiled, ready-to-run version, and the second method is to build it yourself. If you are using the appliance approach, or the minimal ISO, a lot of commands and tools are missing. You will need to install the missing commands, such as git, wget, tar, gcc, etc. My good friend and colleague Bjoern has written a good post on how to get started with the ready-to-run version here. I am going to take another approach and build VIC myself.

To do that, we just do a “git clone” of VIC. If the git binaries are not installed, you will need to add them by running the following command in Photon OS:

root [ ~ ]# tdnf install git
Installing:
 perl-DBIx-Simple       noarch  1.35-1.ph1tp2
 perl-DBD-SQLite        x86_64  1.46-1.ph1tp2
 perl-YAML      noarch  1.14-1.ph1tp2
 perl-DBI       x86_64  1.633-1.ph1tp2
 perl   x86_64  5.18.2-2.ph1tp2
 git    x86_64  2.1.2-1.ph1tp2
Is this ok [y/N]:y
Downloading 7592028.00 of 7592028.00
Downloading 17943120.00 of 17943120.00
Downloading 800663.00 of 800663.00
Downloading 67718.00 of 67718.00
Downloading 2081562.00 of 2081562.00
Downloading 38049.00 of 38049.00
Testing transaction
Running transaction

Complete!
root [ ~ ]#

Now we have all the bits we need to build. One thing to be aware of is disk size: I came very close to using up all of the available space when using the appliance. This is why I would recommend pulling down the full ISO, creating a Linux VM with a large enough VMDK/plenty of disk space, and installing from there. Another option is to add another VMDK to the appliance, create a filesystem on it, mount it, and then use that for the build; but as I said, the appliance is missing a lot of tools, so that will be more challenging.
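
For completeness, adding and mounting a second VMDK would look roughly like this. The device name /dev/sdb is an assumption; it depends on how the disk shows up in your VM:

```shell
# Rough sketch: put the build on a second disk (device name assumed)
df -h .                   # check how much space is left first
mkfs -t ext4 /dev/sdb     # assumes the new VMDK appears as /dev/sdb
mkdir -p /build
mount /dev/sdb /build     # then clone and build under /build
```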

Anyways, without further ado, here are the steps:

root [ ~ ]# git clone https://github.com/vmware/vic
Cloning into 'vic'...
remote: Counting objects: 8922, done.
remote: Compressing objects: 100% (39/39), done.
remote: Total 8922 (delta 7), reused 0 (delta 0), pack-reused 8881
Receiving objects: 100% (8922/8922), 12.70 MiB | 4.79 MiB/s, done.
Resolving deltas: 100% (2911/2911), done.
Checking connectivity... done.

Change directory to vic, and start the compilation:

root [ ~ ]# cd vic
root [ ~/vic ]# docker run -v $(pwd):/go/src/github.com/vmware/vic \
-w /go/src/github.com/vmware/vic golang:1.6 make all
Unable to find image 'golang:1.6' locally
1.6: Pulling from library/golang
.
.
<<-- this can take some time, and there is a lot of output -->>
.
.
Making bootstrap iso
Constructing initramfs archive
364232 blocks
xorriso 1.3.2 : RockRidge filesystem manipulator, libburnia project.

Drive current: -dev '/go/src/github.com/vmware/vic/bin/bootstrap.iso'
Media current: stdio file, overwriteable
Media status : is blank
Media summary: 0 sessions, 0 data blocks, 0 data, 14.1g free
xorriso : UPDATE : 7 files added in 1 seconds
Added to ISO image: directory '/'='/tmp/tmp.6lB4qQGr7I/bootfs'
xorriso : UPDATE : Writing:       8192s   23.9%   fifo 100%  buf  50%
xorriso : UPDATE : Writing:       8192s   23.9%   fifo 100%  buf  50%
ISO image produced: 34162 sectors
Written to medium : 34336 sectors at LBA 32
Writing to '/go/src/github.com/vmware/vic/bin/bootstrap.iso' completed 
successfully.

Building installer
root [ ~/vic ]#

OK. We have now successfully built VIC. Let’s go ahead and deploy a container on vSphere.

root [ ~/vic ]# cd bin
root [ ~/vic/bin ]# ls
appliance-staging.tgz  bootstrap.iso         install.sh         ...
appliance.iso          docker-engine-server  iso-base.tgz       ...
bootstrap-staging.tgz  imagec                port-layer-server  ...

The command we are interested in is install.sh. This creates our containers. What we need to do is provide it with a target, which is the login credentials and IP address of an ESXi host. We also need to provide a target datastore, and the name of the container host (vic-01). The goal here is to deploy a container on an ESXi host:

root [ ~/vic/bin ]# ./install.sh -g -t 'root:VMware123!@10.27.51.5' \
-i vsanDatastore vic-01
# Generating certificate/key pair - private key in vic-01-key.pem
# Logging into the target
./install.sh: line 184: govc: command not found

Oops! I missed a step. We need to install govc (or “go VC”), a vSphere CLI built on top of govmomi. Let’s sort that out next. I am going to place it in its own directory, and set the GOPATH variable to point to it. You should consider putting this in the .bash_profile of the root user so that it persists. The important step is the ‘go get’:

root [ ~/vic ]# pwd
/root/vic
root [ ~/vic ]# mkdir govmw
root [ ~/vic ]# cd govmw/
root [ ~/vic/govmw ]# pwd
/root/vic/govmw
root [ ~/vic/govmw ]# export GOPATH=/root/vic/govmw
root [ ~/vic/govmw ]# PATH=$PATH:$GOPATH/bin
root [ ~/vic/govmw ]# go get github.com/vmware/govmomi/govc
root [ ~/vic/govmw ]# ls
bin  pkg  src
root [ ~/vic/govmw ]# ls bin/
govc
root [ ~/vic/govmw ]#
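
To make the GOPATH and PATH settings survive a logout, as mentioned above, you could append them to root’s .bash_profile (paths taken from this walkthrough):

```shell
# Persist the govc environment across logins
cat >> "$HOME/.bash_profile" <<'EOF'
export GOPATH=/root/vic/govmw
export PATH=$PATH:$GOPATH/bin
EOF
```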

OK, now we have govc. Let’s try once more to deploy a container:

root [ ~/vic/bin ]# ./install.sh -g -t 'root:VMware123!@10.27.51.5' \
-i vsanDatastore vic-01
# Generating certificate/key pair - private key in vic-01-key.pem
# Logging into the target
# Uploading ISOs
[06-04-16 12:46:05] Uploading... OK
[06-04-16 12:46:07] Uploading... OK
# Creating vSwitch
# Creating Portgroup
# Creating the Virtual Container Host appliance
# Adding network interfaces
# Setting component configuration
# Configuring TLS server
# Powering on the Virtual Container Host
# Setting network identities
# Waiting for IP information
#
# SSH to appliance (default=root:password)
# root@10.27.51.103
#
# Log server:
# https://10.27.51.103:2378
#
# Connect to docker:
docker -H 10.27.51.103:2376 --tls --tlscert='vic-01-cert.pem' \
--tlskey='vic-01-key.pem'
 
DOCKER_OPTS="--tls --tlscert='vic-01-cert.pem' \
--tlskey='vic-01-key.pem'"
DOCKER_HOST=10.27.51.103:2376
root [ ~/vic/bin ]#

That looks much better. There are even some docker commands provided which allow us to query the containers. Note that a lot of the docker calls have not yet been implemented, and will fail with errors similar to “Error response from daemon: vSphere Integrated Containers does not implement container.ContainerStop”.

root [ ~/vic/bin ]# docker -H 10.27.51.104:2376 --tls \
--tlscert='vic-02-cert.pem' --tlskey='vic-02-key.pem' info
Containers: 0
Images: 0
Storage Driver: Portlayer Storage
CPUs: 0
Total Memory: 0 B
Name: VIC
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: IPv4 forwarding is disabled.
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
root [ ~/vic/bin ]# docker -H 10.27.51.104:2376 --tls \
--tlscert='vic-02-cert.pem' --tlskey='vic-02-key.pem' version
Client:
 Version:      1.8.1
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   d12ea79
 Built:        Thu Aug 13 02:49:29 UTC 2015
 OS/Arch:      linux/amd64
Server:
 Version:      0.0.1
 API version:  1.23
 Go version:   go1.6
 Git commit:   -
 Built:        -
 OS/Arch:      linux/amd64
 Experimental: true
root [ ~/vic/bin ]#
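
Rather than repeating the -H flag on every call, the standard docker client environment variable can be exported instead. The IP address and certificate names here are taken from the install output above:

```shell
# The docker client picks up DOCKER_HOST automatically
export DOCKER_HOST=10.27.51.103:2376
docker --tls --tlscert=vic-01-cert.pem --tlskey=vic-01-key.pem info
```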

Let’s have a look at this container in vSphere:

[Screenshot: VIC in vSphere]

And that is basically it: containers with applications being created by the software developer, but using/consuming resources from vSphere and managed by the vSphere administrator. I know the majority of my readers are vSphere administrators. How does this approach to managing containers resonate with you folks?

Now, as you might suspect from a v0.1, these are the very first steps towards a far more integrated implementation. However, hopefully it gives you an idea of where we are going with this (it certainly helped me to understand). I’ve already seen some of this future integration and it looks really cool.

As you can see there are various ways to get started, and it is relatively painless. Why not give it a try?

Comments

  1. Great guide on getting it started Cormac, so what can I do now its running?

    1. Well, Photon OS can be used to deploy containers, similar to how docker is used in other distros – but this distro has been tuned to run very well on VMware products like vSphere.

      As for VIC, I did say that there is not much you can do at this point. The main reason for posting the article (and I daresay the reason why we released a v0.1) is to show what we are moving towards. VIC in its current form is by no means production ready, but hopefully you get the idea where we are going, and what the end-goal is.

      1. Cormac, does the 0.1 version of VIC leverage VMFork / Instantclone for efficient 1 container to 1 VM deployments, or is that coming in a later release?

  2. Nice article Cormac… I tried following the same steps but ran into an issue with the following error while executing:

    docker run -v $(pwd):/go/src/github.com/vmware/vic \
    -w /go/src/github.com/vmware/vic golang:1.6 make all

    Is there any specific configuration that needs to be done? I installed Photon OS TP2, getting an IP address from DHCP.

    ERROR :
    ===============================
    Building docker-engine-api server…
    building imagec…
    building tether-windows
    # github.com/vmware/vic/cmd/tether
    cmd/tether/main_windows.go:31: undefined: err
    cmd/tether/main_windows.go:32: undefined: err
    cmd/tether/main_windows.go:33: undefined: err
    cmd/tether/tether.go:98: undefined: utils.SetHostname
    Makefile:202: recipe for target ‘bin/tether-windows.exe’ failed
    make: *** [bin/tether-windows.exe] Error 2

    Thanks
    Prabhuraj

    1. Not sure what it is – I just ran through the exercise once more this morning, and it all worked fine. Let me ask if anyone else has seen it, and what may be the cause.

  3. Hi, thank you for the guide. I am getting the following error with both the full ISO and an appliance with the dependencies installed:
    Building docker-engine-api server…
    building imagec…
    building vicadmin
    building rpctool
    building tether-linux
    building tether-windows
    # github.com/vmware/vic/cmd/tether
    cmd/tether/main_windows.go:31: undefined: err
    cmd/tether/main_windows.go:32: undefined: err
    cmd/tether/main_windows.go:33: undefined: err
    cmd/tether/tether.go:98: undefined: utils.SetHostname
    Makefile:200: recipe for target ‘bin/tether-windows.exe’ failed
    make: *** [bin/tether-windows.exe] Error 2

    Do you have more information about it?

    Thank you !

    1. Sorry – no idea. Someone else reported it, but I cannot reproduce it. I rolled out a new full ISO version of Photon this morning, and it built VIC successfully. Let me ask some folks.

  4. Hello Cormac ,

    Windows containers are coming and I would like to ask: will the Next Platform also support them?

    Not only Linux containers – after Microsoft’s release we are expecting a lot of demand. Will the Next Platform support them the way vCenter manages everything?

    Regards
    VM

  5. Hi Cormac,

    This was great to get things running at 0.1. I noticed that recently more support for docker commands was added, so went to rebuild.

    The build no longer creates ‘install.sh’, and Makefile doesn’t even include the target to ‘make install’ anymore. Do you have any insight on what we need to do now to get it running, or do the docs need to be updated?
