CormacHogan.com – Storage and Virtualization

Upcoming #vBrownBag EMEA Appearance – July 26th at 7pm BST (Mon, 25 Jul 2016)

As my take-3 tenure in the VMware Cloud Native Apps (CNA) team draws to a close, the guys over at #vBrownBag have kindly invited me to come on their show and talk about the various VMware projects and initiatives that I have been lucky enough to be involved with. All going well, I hope to demonstrate the Docker Volume Driver for vSphere, give an overview of Photon Controller CLI and Photon Platform with Docker Swarm, and maybe Kubernetes, as well as some vSphere Integrated Containers (VIC). If you are interested, you can register here. I’d be delighted if you can make it. The show is on at 7pm (local time) tomorrow, Tuesday July 26th. See you there.

Container Networks in VIC v0.4.0 (Wed, 20 Jul 2016)

This is part of a series of articles describing how to use the new features of vSphere Integrated Containers (VIC) v0.4.0. In previous posts, we have looked at deploying your first VCH (Virtual Container Host) and container using the docker API. I also showed you how to create some volumes to provide persistent storage for containers. In this post, we shall take a closer look at networking, and at what commands are available for container networking. I will also highlight some areas where there is still work to be done.

Also, please note that VIC is still not production ready. The aim of these posts is to get you started with VIC, and help you to familiarize yourself with some of the features. Many of the commands and options which work for v0.4.0 may not work in future releases, especially the GA version.

I think the first thing we need to do is to describe the various networks that may be configured when a Virtual Container Host is deployed.

Bridge network         

The bridge network identifies a private port group for containers. This is a network used to support container to container communications. IP addresses on this network are managed by the VCH appliance VM and it’s assumed that this network is private and only the containers are attached to it. If this option is omitted from the create command, and the target is an ESXi host, then a regular standard vSwitch will be created with no physical uplinks associated with it. If the network is omitted, and the target is a vCenter server, an error will be displayed as a distributed port group is required and needs to exist in advance of the deployment. This should be dedicated and must not be the same as any management, external or client network.

Management network     

The management network  identifies the network that the VCH appliance VM should use to connect to the vSphere infrastructure. This must be the same vSphere infrastructure identified in the target parameter. This is also the network over which the VCH appliance VM will receive incoming connections (on port 2377) from the ESXi hosts running the “containers as VMs”. This means that (a) the VCH appliance VM must be able to reach the  vSphere API and (b) the ESXi hosts running the container VMs must be able to reach the VCH appliance VM (to support the docker attach call).

External network       

The external network. This is a VM portgroup/network in the vSphere environment on which container port forwarding should occur, e.g. docker run -p 8080:80 -d tomcat will expose port 8080 on the VCH appliance VM (that is serving the DOCKER_API) and forward connections from the identified network to the tomcat container. If --client-network is specified as a different VM network, then attempting to connect to port 8080 on the appliance from the client network will fail. Likewise, attempting to connect to the docker API from the external network will also fail. This allows some degree of control over how exposed the docker API is while still exposing ports for application traffic. It defaults to the “VM Network”.

Client network         

The client network. This identifies a VM portgroup/network in the vSphere environment that has access to the DOCKER_API. If not set, it defaults to the same network as the external network.
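As an illustration, here is a hedged sketch of separating the client network from the external network using the --client-network option mentioned above (not taken from a run in this environment; the port group names are just examples):

# Sketch only – port group names are examples, credentials as per the other examples in this post
./vic-machine-linux create \
--bridge-network Bridge-DPG \
--image-datastore isilion-nfs-01 \
-t 'administrator@vsphere.local:zzzzzzz@10.27.51.103' \
--compute-resource Mgmt \
--external-network "VM Network" \
--client-network "VMNW51"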

Default Container network

This is the name of a network that can be used for inter-container communication instead of the bridge network. It must use the name of an existing distributed port group when deploying VIC to a vCenter server target. An alias can be specified; if it is not, the alias is set to the name of the port group. The alias is used when specifying the container network DNS, the container network gateway, and a container network IP address range. This allows multiple container networks to be specified. The defaults are 172.16.0.1 for the DNS server and gateway, and 172.16.0.0/16 for the IP address range. If a container network is not specified, the bridge network is used by default.

This network diagram, taken from the official VIC documentation on github, provides a very good overview of the various VIC related networks:

Let’s run some VCH deployment examples with some different network options. First, I will not specify any network options, which means that management and client will share the same network as external, which defaults to the VM Network. My VM Network is attached to VLAN 32, and has a DHCP server to provide IP addresses. Here are the results.

root@photon-NaTv5i8IA [ /workspace/vic ]# ./vic-machine-linux create  \
--bridge-network Bridge-DPG \
--image-datastore isilion-nfs-01 \
-t 'administrator@vsphere.local:zzzzzzz@10.27.51.103'  \
--compute-resource Mgmt
INFO[2016-07-15T09:31:59Z] ### Installing VCH ####
.
.
INFO[2016-07-15T09:32:01Z] Network role client is sharing NIC with external
INFO[2016-07-15T09:32:01Z] Network role management is sharing NIC with external
.
.
INFO[2016-07-15T09:32:34Z] Connect to docker:
INFO[2016-07-15T09:32:34Z] docker -H 10.27.32.113:2376 --tls info
INFO[2016-07-15T09:32:34Z] Installer completed successfully

Now, let’s deploy the external network on another network. This time it is the VM network “VMNW51”. This is on VLAN 51, which also has a DHCP server to provide addresses. Note once again that the client and management networks use the same network as the external network.

root@photon-NaTv5i8IA [ /workspace/vic ]# ./vic-machine-linux create \
--bridge-network Bridge-DPG \
--image-datastore isilion-nfs-01 \
-t 'administrator@vsphere.local:zzzzz@10.27.51.103' \
--compute-resource Mgmt \
--external-network VMNW51
INFO[2016-07-14T14:43:04Z] ### Installing VCH ####
.
.
INFO[2016-07-14T14:43:06Z] Network role management is sharing NIC with client
INFO[2016-07-14T14:43:06Z] Network role external is sharing NIC with client
.
.
INFO[2016-07-14T14:43:44Z] Connect to docker:
INFO[2016-07-14T14:43:44Z] docker -H 10.27.51.47:2376 --tls info
INFO[2016-07-14T14:43:44Z] Installer completed successfully

Now let’s try an example where the external network is on VLAN 32 but the management network is on VLAN 51. Note that this time there is no message about the management network sharing a NIC with the client.

root@photon-NaTv5i8IA [ /workspace/vic ]# ./vic-machine-linux create \
--bridge-network Bridge-DPG \
--image-datastore isilion-nfs-01 \
-t 'administrator@vsphere.local:zzzzzzz@10.27.51.103' \
--compute-resource Mgmt \
--management-network "VMNW51" \
--external-network "VM Network"
INFO[2016-07-15T09:40:43Z] ### Installing VCH ####
.
.
INFO[2016-07-15T09:40:45Z] Network role client is sharing NIC with external
.
.
INFO[2016-07-15T09:41:24Z] Connect to docker:
INFO[2016-07-15T09:41:24Z] docker -H 10.27.33.44:2376 --tls info
INFO[2016-07-15T09:41:24Z] Installer completed successfully
root@photon-NaTv5i8IA [ /workspace/vic ]#

Let’s examine the VCH appliance VM from a vSphere perspective:

So we can see 3 adapters on the VCH – 1 is the external network, 2 is the management network and 3 is the bridge network to access the container network. And finally, just to ensure that we can deploy a container with this network configuration, we will do the following:

root@photon-NaTv5i8IA [ /workspace/vic ]# docker -H 10.27.33.44:2376 --tls \
run -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
a3ed95caeb02: Pull complete
8ddc19f16526: Pull complete
Digest: sha256:65ce39ce3eb0997074a460adfb568d0b9f0f6a4392d97b6035630c9d7bf92402
Status: Downloaded newer image for library/busybox:latest

/ #

Now that we have connected to the container running the busybox image, let’s examine its networking:

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
    link/ether 00:50:56:86:18:b6 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe86:18b6/64 scope link
       valid_lft forever preferred_lft forever
/ #
/ # cat /etc/resolv.conf
nameserver 172.16.0.1
/#
/ # netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 eth0
172.16.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
/ #

So we can see that it has been assigned an IP address of 172.16.0.2, and that the DNS server and gateway are set to 172.16.0.1. This is the default container network in VIC.

Alternate Container Network

Let’s now look at creating a completely different container network. To do this, we use some additional vic-machine command line arguments, as shown below:

./vic-machine-linux create --bridge-network Bridge-DPG \
--image-datastore isilion-nfs-01 \
-t 'administrator@vsphere.local:VMware123!@10.27.51.103' \
--compute-resource Mgmt \
--container-network con-nw:con-nw \
--container-network-gateway con-nw:192.168.100.1/16 \
--container-network-dns con-nw:192.168.100.1 \
--container-network-ip-range con-nw:192.168.100.2-100

The first thing to note is that since I am deploying to a vCenter Server target, the container network must be a distributed port group. In my case, it is called con-nw, and the parameter --container-network specifies which DPG to use. You also have the option of adding an alias for the network, separated from the DPG with “:”. This alias can then be used in other parts of the command line. If you do not specify an alias, the full name of the DPG must be used in other parts of the command line. In my case, I made the alias the same as the DPG.

[Note: this is basically an external network, so the DNS and gateway, as well as the range of consumable IP addresses for containers must be available through some external means – containers are simply consuming them, and VCH will not provide DHCP or DNS services on this external network]

Other options are necessary to specify the gateway, DNS server and IP address range for this container network. CIDR notation and ranges both work. Note however that the IP address range must not include the IP address of the gateway or DNS server, which is why I have specified a range. Here is the output from running the command:

INFO[2016-07-20T09:06:05Z] ### Installing VCH ####
INFO[2016-07-20T09:06:05Z] Generating certificate/key pair - private key in \
./virtual-container-host-key.pem
INFO[2016-07-20T09:06:07Z] Validating supplied configuration
INFO[2016-07-20T09:06:07Z] Firewall status: DISABLED on /CNA-DC/host/Mgmt/10.27.51.8
INFO[2016-07-20T09:06:07Z] Firewall configuration OK on hosts:
INFO[2016-07-20T09:06:07Z] /CNA-DC/host/Mgmt/10.27.51.8
INFO[2016-07-20T09:06:07Z] License check OK on hosts:
INFO[2016-07-20T09:06:07Z] /CNA-DC/host/Mgmt/10.27.51.8
INFO[2016-07-20T09:06:07Z] DRS check OK on:
INFO[2016-07-20T09:06:07Z] /CNA-DC/host/Mgmt/Resources
INFO[2016-07-20T09:06:08Z] Creating Resource Pool virtual-container-host
INFO[2016-07-20T09:06:08Z] Creating appliance on target
ERRO[2016-07-20T09:06:08Z] unable to encode []net.IP (slice) for \
guestinfo./container_networks|con-nw/dns: net.IP is an unhandled type
INFO[2016-07-20T09:06:08Z] Network role client is sharing NIC with external
INFO[2016-07-20T09:06:08Z] Network role management is sharing NIC with external
ERRO[2016-07-20T09:06:09Z] unable to encode []net.IP (slice) for \
guestinfo./container_networks|con-nw/dns: net.IP is an unhandled type
INFO[2016-07-20T09:06:09Z] Uploading images for container
INFO[2016-07-20T09:06:09Z] bootstrap.iso
INFO[2016-07-20T09:06:09Z] appliance.iso
INFO[2016-07-20T09:06:14Z] Registering VCH as a vSphere extension
INFO[2016-07-20T09:06:20Z] Waiting for IP information
INFO[2016-07-20T09:06:41Z] Waiting for major appliance components to launch
INFO[2016-07-20T09:06:41Z] Initialization of appliance successful
INFO[2016-07-20T09:06:41Z]
INFO[2016-07-20T09:06:41Z] Log server:
INFO[2016-07-20T09:06:41Z] https://10.27.32.116:2378
INFO[2016-07-20T09:06:41Z]
INFO[2016-07-20T09:06:41Z] DOCKER_HOST=10.27.32.116:2376
INFO[2016-07-20T09:06:41Z]
INFO[2016-07-20T09:06:41Z] Connect to docker:
INFO[2016-07-20T09:06:41Z] docker -H 10.27.32.116:2376 --tls info
INFO[2016-07-20T09:06:41Z] Installer completed successfully

Ignore the “unable to encode” errors – these will be removed in a future release. Before we create our first container, let’s examine the networks:

root@photon-NaTv5i8IA [ /workspace/vic ]# docker -H 10.27.32.116:2376 --tls \
network ls
NETWORK ID          NAME                DRIVER
8627c6f733e8        bridge              bridge
c23841d4ac24        con-nw              external

Run a Container on the Container Network

Now we can run a container (a simple busybox one) and specify our newly created “con-nw”, as shown here:

root@photon-NaTv5i8IA [ /workspace/vic040 ]# docker -H 10.27.32.116:2376 --tls \
run -it --net=con-nw busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
a3ed95caeb02: Pull complete
8ddc19f16526: Pull complete
Digest: sha256:65ce39ce3eb0997074a460adfb568d0b9f0f6a4392d97b6035630c9d7bf92402
Status: Downloaded newer image for library/busybox:latest
/ # 

Now let’s take a look at the networking inside the container:

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq qlen 1000
 link/ether 00:50:56:86:51:bc brd ff:ff:ff:ff:ff:ff
 inet 192.168.100.2/16 scope global eth0
 valid_lft forever preferred_lft forever
 inet6 fe80::250:56ff:fe86:51bc/64 scope link
 valid_lft forever preferred_lft forever
/ # netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.100.1 0.0.0.0 UG 0 0 0 eth0
192.168.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
/ # cat /etc/resolv.conf
nameserver 192.168.100.1
/ #

So it looks like all of the container network settings have taken effect. And if we look at this container from a vSphere perspective, we can see it is attached to the con-nw DPG:

This is one of the advantages of VIC – visibility into container network configurations (not just a black box).

Some caveats

As I keep mentioning, this product is not yet production ready, but it is getting close. The purpose of these posts is to give you a good experience if you want to try out v0.4.0 right now. With that in mind, there are a few caveats to be aware of.

  1. Port exposing/port mapping is not yet working. If you want to run a web server type app (e.g. Nginx) in a container and have its ports mapped through the docker API endpoint (a popular thing to test/demo with), you cannot do this at the moment (see the sketch after this list).
  2. You saw the DNS encoding errors in the VCH create flow – these are cosmetic and can be ignored. These will get fixed.
  3. The gateway CIDR works with /16 but not /24. Stick with a /16 CIDR for the moment for your testing.
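To make item 1 concrete, here is a hedged sketch of the kind of port-mapped run that is not expected to work yet in v0.4.0 (endpoint taken from the example above; nginx is just an illustrative image):

# Expected to fail in v0.4.0 – port mapping is not yet implemented
docker -H 10.27.32.116:2376 --tls run -d -p 8080:80 nginx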

With those items in mind, hopefully there is enough information here to allow you to get some experience with container networks in VIC. Let us know if you run into any other issues.

Deploy Mesos on Photon Controller (video) (Tue, 19 Jul 2016)

This video will show you the steps involved in deploying Apache Mesos on VMware’s Photon Controller product using the “cluster” mechanism available in Photon Controller. It uses the Photon Controller CLI to create a tenant, resource ticket and a project. It then shows how to create an appropriate image for the VMs that will run Mesos, how to enable the Photon Controller deployment for Mesos clusters, and finally the creation of the cluster. After the deployment has succeeded, you are shown some command outputs and Photon Controller UI views of the running cluster. I decided to pick Mesos in this case, as I have already written a lot about Docker Swarm and Kubernetes, and have shown how to deploy both of these natively, and using the Photon Controller “canned” cluster mechanism.

*** Note that at the time of writing, stand-alone Photon Controller is still not GA ***

*** Steps highlighted in this video may change in the GA version of the product ***


The video is just over 13 minutes in length. If you want to read up on the actual steps, or you wish to learn about how to use Marathon for a simple container demo, this blog post I created previously might be useful.

For all of my Cloud Native Apps articles, click this link.

Getting started with vSphere Integrated Containers (short video) (Mon, 18 Jul 2016)

I decided to put together a very short video on VIC – vSphere Integrated Containers v0.4.0. In the video, I show you how to create your very first VCH (Virtual Container Host), and then how you can create a very simple container using a docker API endpoint. I also show you how this is reflected in vSphere. Of course, VIC v0.4.0 is still a tech preview, and is not ready for production. Also note that a number of things may change before VIC becomes generally available (GA). However, hopefully this is of interest to those of you who wish to get started with v0.4.0.



For more information on VIC v0.4.0, visit us on github.

Container Volumes in VIC v0.4.0 (Fri, 15 Jul 2016)

I mentioned yesterday that VMware made vSphere Integrated Containers (VIC) v0.4.0 available. Included in this version is support for container volumes. Now, as mentioned yesterday, VIC is still a work in progress, and not everything has yet been implemented. In this post I want to step you through some of the enhancements that we have made around docker volume support in VIC. This will hopefully provide you with enough information so that you can try this out for yourself.

To begin with, you need to ensure that a “volume store” is created when the VCH (Virtual Container Host) is deployed. This is a datastore and folder where the volumes are stored, and for ease of use, you can apply a label to it. One useful tidbit here is that if you use the label “default” for your volume store, you do not have to specify it on the docker volume command line. For example, if I deploy a VCH as follows, note the --volume-store parameter. The format is “label:datastore/folder-on-datastore”. Here I have requested that the volume store be placed on the isilion-nfs-01 datastore in the folder called docker-vols. I have also labeled it as “default”. For information on the other parameters, refer to my previous post – Getting started with VIC v0.4.0.

root@photon [ /workspace/vic ]# ./vic-machine-linux create  \
--bridge-network Bridge-DPG --image-datastore isilion-nfs-01 \
-t 'administrator@vsphere.local:zzzzz@10.27.51.103'  \
--compute-resource Mgmt --external-network VMNW51 --name VCH01 \
--volume-store "default:isilion-nfs-01/docker-vols"
INFO[2016-07-14T11:25:13Z] ### Installing VCH ####
INFO[2016-07-14T11:25:13Z] Generating certificate/key pair - private key in ./VCH01-key.pem
INFO[2016-07-14T11:25:13Z] Validating supplied configuration
INFO[2016-07-14T11:25:13Z] Firewall status: DISABLED on /CNA-DC/host/Mgmt/10.27.51.8
INFO[2016-07-14T11:25:13Z] Firewall configuration OK on hosts:
INFO[2016-07-14T11:25:13Z]   /CNA-DC/host/Mgmt/10.27.51.8
INFO[2016-07-14T11:25:13Z] License check OK on hosts:
INFO[2016-07-14T11:25:13Z]   /CNA-DC/host/Mgmt/10.27.51.8
INFO[2016-07-14T11:25:13Z] DRS check OK on:
INFO[2016-07-14T11:25:13Z]   /CNA-DC/host/Mgmt/Resources
INFO[2016-07-14T11:25:14Z] Creating Resource Pool VCH01
INFO[2016-07-14T11:25:14Z] Creating directory [isilion-nfs-01] docker-vols
INFO[2016-07-14T11:25:14Z] Datastore path is [isilion-nfs-01] docker-vols
INFO[2016-07-14T11:25:14Z] Creating appliance on target
INFO[2016-07-14T11:25:14Z] Network role client is sharing NIC with external
INFO[2016-07-14T11:25:14Z] Network role management is sharing NIC with external
INFO[2016-07-14T11:25:15Z] Uploading images for container
INFO[2016-07-14T11:25:15Z]      bootstrap.iso
INFO[2016-07-14T11:25:15Z]      appliance.iso
INFO[2016-07-14T11:25:20Z] Registering VCH as a vSphere extension
INFO[2016-07-14T11:25:26Z] Waiting for IP information
INFO[2016-07-14T11:25:52Z] Waiting for major appliance components to launch
INFO[2016-07-14T11:25:52Z] Initialization of appliance successful
INFO[2016-07-14T11:25:52Z]
INFO[2016-07-14T11:25:52Z] Log server:
INFO[2016-07-14T11:25:52Z] https://10.27.51.42:2378
INFO[2016-07-14T11:25:52Z]
INFO[2016-07-14T11:25:52Z] DOCKER_HOST=10.27.51.42:2376
INFO[2016-07-14T11:25:52Z]
INFO[2016-07-14T11:25:52Z] Connect to docker:
INFO[2016-07-14T11:25:52Z] docker -H 10.27.51.42:2376 --tls info
INFO[2016-07-14T11:25:52Z] Installer completed successfully
root@photon [ /workspace/vic ]#

Now that my VCH is deployed and my docker endpoint is available, we can use the docker command to create volumes and attach them to containers.

First thing to note – the “docker volume ls” and the “docker volume inspect” commands are not yet implemented. So we do not yet have a good way of examining the storage consumption and layout through the docker API. This is work in progress however. That aside, we can still create and consume volumes. Here is how to do that.

root@photon [ /workspace/vic ]#  docker -H 10.27.51.42:2376 --tls volume create \
--name=demo --opt Capacity=1024
demo
root@photon [ /workspace/vic ]#

Notice that I did not specify which “volume store” to use; it simply defaulted to “default”, which is the docker-vols folder on my isilion-nfs-01 datastore. Let’s now take a look and see what got created from a vSphere perspective. If I select the datastore in the inventory, then view the files, I see a <volume-name>.VMDK was created in the folder docker-vols/VIC/volumes/<volume-name>:

So one thing to point out here – the Capacity=1024 option in the docker volume create command is expressed in 1MB blocks. So what was created is a 1GB VMDK. My understanding is that we will add additional granularity to this capacity option going forward.
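Since the capacity is counted in 1MB blocks, a hedged sketch of creating a 2GB volume would look like this (demo2 is just an illustrative name):

# 2048 x 1MB blocks, i.e. a 2GB VMDK; "demo2" is a hypothetical volume name
docker -H 10.27.51.42:2376 --tls volume create --name=demo2 --opt Capacity=2048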

Now to create a container to consume this volume. Let’s start with an Ubuntu image:

root@photon [ /workspace/vic ]# docker -H 10.27.51.42:2376 --tls run \
-v demo:/demo -it ubuntu /bin/bash
root@3ef0b682bc8d:/#

root@3ef0b682bc8d:/# mount | grep demo
/dev/sdb on /demo type ext4 (rw,noatime,data=ordered)

root@3ef0b682bc8d:/# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 965M 0 965M 0% /dev
tmpfs 1003M 0 1003M 0% /dev/shm
tmpfs 1003M 136K 1003M 1% /run
tmpfs 1003M 0 1003M 0% /sys/fs/cgroup
/dev/sda 7.8G 154M 7.2G 3% /
tmpfs 128M 43M 86M 34% /.tether
tmpfs 1.0M 0 1.0M 0% /.tether-init
rootfs 965M 0 965M 0% /lib/modules
/dev/disk/by-label/fe01ce2a7fbac8fa 976M 1.3M 908M 1% /demo
root@751ecc91c355:/#

Let’s now create a file in the volume in question, and make sure the data is persistent:

root@3ef0b682bc8d:/# cd /demo
root@3ef0b682bc8d:/demo# echo "important" >> need-to-persist.txt
root@3ef0b682bc8d:/demo# cat need-to-persist.txt
important
root@3ef0b682bc8d:/demo# cd ..
root@3ef0b682bc8d:/# exit
exit

Now launch a new container (a simple busybox image) with the same volume, and ensure that the data that we created is still accessible and persistent.

root@photon [ /workspace/vic ]#  docker -H 10.27.51.42:2376 --tls run \
-v demo:/demo -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
a3ed95caeb02: Pull complete
8ddc19f16526: Pull complete
Digest: sha256:65ce39ce3eb0997074a460adfb568d0b9f0f6a4392d97b6035630c9d7bf92402
Status: Downloaded newer image for library/busybox:latest

/ # df
Filesystem           1K-blocks      Used Available Use% Mounted on
devtmpfs                987544         0    987544   0% /dev
tmpfs                  1026584         0   1026584   0% /dev/shm
tmpfs                  1026584       132   1026452   0% /run
tmpfs                  1026584         0   1026584   0% /sys/fs/cgroup
/dev/sda               8125880     19612   7670456   0% /
tmpfs                   131072     43940     87132  34% /.tether
tmpfs                     1024         0      1024   0% /.tether-init
/dev/disk/by-label/fe01ce2a7fbac8fa
                        999320      1288    929220   0% /demo
/ # cd /demo
/demo # ls
lost+found           need-to-persist.txt
/demo # cat need-to-persist.txt
important
/demo #

There you have it – docker volumes in VIC. One final note is what happens when the VCH is deleted. If the VCH is deleted, then the docker volumes associated with that VCH are also deleted (although I also believe that we will change this behavior in future versions):

root@photon [ /workspace/vic ]# ./vic-machine-linux delete -t \
'administrator@vsphere.local:zzzzz@10.27.51.103' \
--compute-resource Mgmt --name VCH01
INFO[2016-07-14T14:36:26Z] ### Removing VCH ####
INFO[2016-07-14T14:36:26Z] Removing VMs
INFO[2016-07-14T14:36:29Z] Removing images
INFO[2016-07-14T14:36:29Z] Removing volumes
INFO[2016-07-14T14:36:29Z] Removing appliance VM network devices
INFO[2016-07-14T14:36:30Z] Removing VCH vSphere extension
INFO[2016-07-14T14:36:35Z] Removing Resource Pool VCH01
INFO[2016-07-14T14:36:35Z] Completed successfully
root@photon-NaTv5i8IA [ /workspace/vic ]#

So for now, be careful if you place data in a volume, and then remove the VCH as this will also remove the volumes.

OK – one final test. Let’s assume that you did not use the “default” label, or that you had multiple volume stores specified in the command line (which is perfectly acceptable). How then would you select the correct datastore for the volume? Let’s deploy a new VCH, and this time we will set the label to NFS:

root@photon [ /workspace/vic ]#  ./vic-machine-linux create \
--bridge-network Bridge-DPG --image-datastore isilion-nfs-01 \
-t 'administrator@vsphere.local:VMware123!@10.27.51.103' \
 --compute-resource Mgmt --external-network VMNW51 --name VCH01 \
--volume-store "NFS:isilion-nfs-01/docker-vols"
INFO[2016-07-14T14:43:04Z] ### Installing VCH ####
INFO[2016-07-14T14:43:04Z] Generating certificate/key pair - private key in ./VCH01-key.pem
INFO[2016-07-14T14:43:05Z] Validating supplied configuration
INFO[2016-07-14T14:43:05Z] Firewall status: DISABLED on /CNA-DC/host/Mgmt/10.27.51.8
INFO[2016-07-14T14:43:05Z] Firewall configuration OK on hosts:
INFO[2016-07-14T14:43:05Z]   /CNA-DC/host/Mgmt/10.27.51.8
INFO[2016-07-14T14:43:05Z] License check OK on hosts:
INFO[2016-07-14T14:43:05Z]   /CNA-DC/host/Mgmt/10.27.51.8
INFO[2016-07-14T14:43:05Z] DRS check OK on:
INFO[2016-07-14T14:43:05Z]   /CNA-DC/host/Mgmt/Resources
INFO[2016-07-14T14:43:06Z] Creating Resource Pool VCH01
INFO[2016-07-14T14:43:06Z] Datastore path is [isilion-nfs-01] docker-vols
INFO[2016-07-14T14:43:06Z] Creating appliance on target
INFO[2016-07-14T14:43:06Z] Network role management is sharing NIC with client
INFO[2016-07-14T14:43:06Z] Network role external is sharing NIC with client
INFO[2016-07-14T14:43:08Z] Uploading images for container
INFO[2016-07-14T14:43:08Z]      bootstrap.iso
INFO[2016-07-14T14:43:08Z]      appliance.iso
INFO[2016-07-14T14:43:13Z] Registering VCH as a vSphere extension
INFO[2016-07-14T14:43:18Z] Waiting for IP information
INFO[2016-07-14T14:43:44Z] Waiting for major appliance components to launch
INFO[2016-07-14T14:43:44Z] Initialization of appliance successful
INFO[2016-07-14T14:43:44Z]
INFO[2016-07-14T14:43:44Z] Log server:
INFO[2016-07-14T14:43:44Z] https://10.27.51.47:2378
INFO[2016-07-14T14:43:44Z]
INFO[2016-07-14T14:43:44Z] DOCKER_HOST=10.27.51.47:2376
INFO[2016-07-14T14:43:44Z]
INFO[2016-07-14T14:43:44Z] Connect to docker:
INFO[2016-07-14T14:43:44Z] docker -H 10.27.51.47:2376 --tls info
INFO[2016-07-14T14:43:44Z] Installer completed successfully

Now we want to create a volume, but place it in the folder/datastore identified by the label NFS. First, let’s try to create a volume as before:

root@photon [ /workspace/vic ]# docker -H 10.27.51.47:2376 --tls --tls \
volume create --name=demo --opt Capacity=1024
Error response from daemon: Server error from Portlayer: [POST /storage/volumes/][500]\
 createVolumeInternalServerError

Yes – we know it is a horrible error message, and we will fix that. But to make it work, you now need another option to docker volume create, called VolumeStore. Here it is.

root@photon [ /workspace/vic ]# docker -H 10.27.51.47:2376 --tls \
 volume create --name=demo --opt Capacity=1024 --opt VolumeStore=NFS
demo
root@photon [ /workspace/vic ]#

Now you can consume the volume in the same way as it was shown in the previous example.
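For example, a hedged sketch of attaching this volume to a busybox container, following the same pattern as earlier but against this VCH’s endpoint:

docker -H 10.27.51.47:2376 --tls run -v demo:/demo -it busybox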

Caution: A number of commands shown here will definitely change in future releases of VIC. However, what I have shown you is how to get started with docker volumes in VIC v0.4.0. If you do run into some anomalies that are not described in the post, and you feel it is a mismatch in behavior with standard docker, please let me know. I will feed this back to our engineering team, who are always open to suggestions on how to make VIC as seamless as possible to standard docker behavior.

Getting Started with vSphere Integrated Containers v0.4.0 (Thu, 14 Jul 2016)

I’ve been working very closely with our vSphere Integrated Containers (VIC) team here at VMware recently, and am delighted to say that v0.4.0 is now available for download from GitHub. Of course, this is still not supported in production, and is still in tech preview. However, for those of you interested, it gives you an opportunity to try it out and see the significant progress made by the team over the last couple of months. You can download it from bintray. This version of VIC brings us closer and closer to the original functionality of “Project Bonneville” for running containers as VMs (not in VMs) on vSphere. The docker API endpoint now provides almost identical functionality to running docker anywhere else, although there is still a little bit of work to do. Let’s take a closer look.

What is VIC?

VIC allows customers to run “containers as VMs” in the vSphere infrastructure, rather than “containers in a VM”. It can be deployed directly to a standalone ESXi host, or it can be deployed to vCenter Server. This has some advantages over the “container in a VM” approach which I highlighted here in my post which compared and contrasted VIC with Photon Controller.

VCH Deployment

Simply pull down the zipped archive from bintray, and extract it. I have downloaded it to a folder called /workspace on my Photon OS VM.

root@photon [ /workspace ]# tar zxvf vic_0.4.0.tar.gz
vic/
vic/bootstrap.iso
vic/vic-machine-darwin
vic/appliance.iso
vic/README
vic/LICENSE
vic/vic-machine-windows.exe
vic/vic-machine-linux

As you can see, there is a vic-machine command for Linux, Windows and Darwin (Fusion). Let’s see what the options are for building the VCH – Virtual Container Host.

The “appliance.iso” is used to deploy the VCH, and the “bootstrap.iso” is used for a minimal Linux image to bootstrap the containers before overlaying them with the chosen image. More on this shortly.

root@photon [ /workspace/vic ]# ./vic-machine-linux
NAME:
 vic-machine-linux - Create and manage Virtual Container Hosts

USAGE:
 vic-machine-linux [global options] command [command options] [arguments...]

VERSION:
 2868-0fcaa7e27730c2b4d8d807f3de19c53670b94477

COMMANDS:
 create Deploy VCH
 delete Delete VCH and associated resources
 inspect Inspect VCH
 version Show VIC version information

GLOBAL OPTIONS:
 --help, -h show help
 --version, -v print the version

And to get more info about the “create” option, do the following:

root@photon [ /workspace/vic ]# ./vic-machine-linux create -h

I won’t display the output here. You can see it for yourself when you run the command. Further details on deployment can also be found here in the official docs. In the following create example, I am going to do the following:

  • Deploy VCH to a vCenter Server at 10.27.51.103
  • I used administrator@vsphere.local as the user, with a password of zzzzzzz
  • Use the cluster called Mgmt as the destination Resource Pool for VCH
  • Create a resource pool and a VCH (Container Host) with the name VCH01
  • The external network (where images will be pulled from by VCH01) is VMNW51
  • The bridge network to allow inter-container communication is a distributed port group called Bridge-DPG
  • The datastore where container images are to be stored is isilion-nfs-01
  • Persistent container volumes will be stored in the folder VIC on isilion-nfs-01 and will be labeled corvols.

Here is the command, and output:

root@photon [ /workspace/vic ]# ./vic-machine-linux create  --bridge-network \
Bridge-DPG  --image-datastore isilion-nfs-01 \
-t 'administrator@vsphere.local:zzzzzzz@10.27.51.103' \
--compute-resource Mgmt --external-network VMNW51 --name VCH01 \
--volume-store "corvols:isilion-nfs-01/VIC" 
INFO[2016-07-14T08:03:02Z] ### Installing VCH #### 
INFO[2016-07-14T08:03:02Z] Generating certificate/key pair - private key in ./VCH01-key.pem 
INFO[2016-07-14T08:03:03Z] Validating supplied configuration 
INFO[2016-07-14T08:03:03Z] Firewall status: DISABLED on /CNA-DC/host/Mgmt/10.27.51.8 
INFO[2016-07-14T08:03:03Z] Firewall configuration OK on hosts: 
INFO[2016-07-14T08:03:03Z]   /CNA-DC/host/Mgmt/10.27.51.8 
INFO[2016-07-14T08:03:04Z] License check OK on hosts: 
INFO[2016-07-14T08:03:04Z]   /CNA-DC/host/Mgmt/10.27.51.8 
INFO[2016-07-14T08:03:04Z] DRS check OK on: 
INFO[2016-07-14T08:03:04Z]   /CNA-DC/host/Mgmt/Resources 
INFO[2016-07-14T08:03:04Z] Creating Resource Pool VCH01 
INFO[2016-07-14T08:03:04Z] Datastore path is [isilion-nfs-01] VIC 
INFO[2016-07-14T08:03:04Z] Creating appliance on target 
INFO[2016-07-14T08:03:04Z] Network role client is sharing NIC with external 
INFO[2016-07-14T08:03:04Z] Network role management is sharing NIC with external 
INFO[2016-07-14T08:03:05Z] Uploading images for container 
INFO[2016-07-14T08:03:05Z]      bootstrap.iso 
INFO[2016-07-14T08:03:05Z]      appliance.iso 
INFO[2016-07-14T08:03:10Z] Registering VCH as a vSphere extension 
INFO[2016-07-14T08:03:16Z] Waiting for IP information 
INFO[2016-07-14T08:03:40Z] Waiting for major appliance components to launch 
INFO[2016-07-14T08:03:40Z] Initialization of appliance successful 
INFO[2016-07-14T08:03:40Z] 
INFO[2016-07-14T08:03:40Z] Log server: 
INFO[2016-07-14T08:03:40Z] https://10.27.51.40:2378 
INFO[2016-07-14T08:03:40Z] 
INFO[2016-07-14T08:03:40Z] DOCKER_HOST=10.27.51.40:2376 
INFO[2016-07-14T08:03:40Z] 
INFO[2016-07-14T08:03:40Z] Connect to docker: 
INFO[2016-07-14T08:03:40Z] docker -H 10.27.51.40:2376 --tls info 
INFO[2016-07-14T08:03:40Z] Installer completed successfully 
root@photon [ /workspace/vic ]#

From the last pieces of output, I have the necessary docker API endpoint to allow me to begin creating containers. Let’s look at what has taken place in vCenter at this point. First, we can see the new VCH resource pool and appliance:

And next, if we examine the virtual hardware of the VCH, we can see how the appliance.iso is utilized, along with the fact that the VCH has access to the external network (VMNW51) for downloading images from docker repos, and access to the container/bridge network:

Docker Containers

OK – so everything is now in place for us to start creating “containers as VMs” using standard docker commands against the docker endpoint provided by the VCH. Let’s begin with some basic docker query commands such as “info” and “ps”. These can be revisited at any point to get additional details about the state of the containers and images that have been deployed in your vSphere environment. Let’s first display the “info” output, immediately followed by the “ps” output.

root@photon [ /workspace/vic ]#  docker -H 10.27.51.40:2376 --tls info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Storage Driver: vSphere Integrated Containers Backend Engine
vSphere Integrated Containers Backend Engine: RUNNING
Execution Driver: vSphere Integrated Containers Backend Engine
Plugins:
 Volume: ds://://@isilion-nfs-01/%5Bisilion-nfs-01%5D%20VIC
 Network: bridge
Kernel Version: 4.4.8-esx
Operating System: VMware Photon/Linux
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.958 GiB
Name: VCH01
ID: vSphere Integrated Containers
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
Registry: registry-1.docker.io
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
WARNING: IPv4 forwarding is disabled
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
root@photon-NaTv5i8IA [ /workspace/vic ]#

root@photon [ /workspace/vic ]#  docker -H 10.27.51.40:2376 --tls ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS\
              PORTS               NAMES
root@photon [ /workspace/vic ]#

So not a lot going on at the moment. Let’s deploy our very first (simple) container – busybox:

root@photon [ /workspace/vic ]# docker -H 10.27.51.40:2376 --tls run -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
a3ed95caeb02: Pull complete
8ddc19f16526: Pull complete
Digest: sha256:65ce39ce3eb0997074a460adfb568d0b9f0f6a4392d97b6035630c9d7bf92402
Status: Downloaded newer image for library/busybox:latest
/ # ls
bin etc lib mnt root sbin tmp var
dev home lost+found proc run sys usr
/ # ls /etc
group hostname hosts localtime passwd resolv.conf shadow
/ #

This has dropped me into a shell on the image “busybox”. This is a bit of a simple image, but what it has confirmed is that the VCH was able to pull images from docker, and it has successfully launched a “container as a VM” also.

Congratulations! You have deployed your first container “as a VM”.

Let’s now go back to vCenter, and examine things from there. The first thing we notice is that in the VCH resource pool, we have our new container in the inventory:

And now if we examine the virtual hardware of that container, we can find the location of the image on the image datastore, the fact that it is connected to the container/bridge network, and that the CD is connected to the “bootstrap.iso” image that we saw in the VCH folder on initial deployment.

And now if I return to the Photon OS CLI (in a new shell), I can run additional docker commands such as “ps” to examine the state:

root@photon [ /workspace ]#  docker -H 10.27.51.40:2376 --tls ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS \
             PORTS               NAMES
045e56ad498c        busybox             "sh"                20 minutes ago      Running\
                                 ecstatic_meninsky
root@photon [ /workspace ]# 

And we can see our running container. Now there are a lot of other things that we can do, but this is hopefully enough to get you started with v0.4.0.
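If you would rather not pass -H on every call, here is a hedged sketch using the standard DOCKER_HOST environment variable (the endpoint value is the one reported by the installer above):

# Standard docker client behaviour; endpoint taken from the installer output
export DOCKER_HOST=tcp://10.27.51.40:2376
docker --tls info
docker --tls ps -a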

Removing VCH

To tidy up, you can follow this procedure. First stop and remove the containers, then remove the VCH:

root@photon [ /workspace/vic ]# docker -H 10.27.51.40:2376 --tls stop 045e56ad498c
045e56ad498c
root@photon [ /workspace/vic ]# docker -H 10.27.51.40:2376 --tls rm 045e56ad498c
045e56ad498c
root@photon [ /workspace/vic ]# docker -H 10.27.51.40:2376 --tls ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@photon [ /workspace/vic ]# ./vic-machine-linux delete \
-t 'administrator@vsphere.local:VMware123!@10.27.51.103' \
--compute-resource Mgmt --name VCH01
INFO[2016-07-14T09:20:55Z] ### Removing VCH ####
INFO[2016-07-14T09:20:55Z] Removing VMs
INFO[2016-07-14T09:20:55Z] Removing images
INFO[2016-07-14T09:20:55Z] Removing volumes
INFO[2016-07-14T09:20:56Z] Removing appliance VM network devices
INFO[2016-07-14T09:20:58Z] Removing VCH vSphere extension
INFO[2016-07-14T09:21:02Z] Removing Resource Pool VCH01
INFO[2016-07-14T09:21:02Z] Completed successfully
root@photon [ /workspace/vic ]#

For more details on using vSphere Integrated Containers v0.4.0 see the user guide on github here and command usage guide on github here.

And if you are coming to VMworld 2016, you should definitely check out the various sessions, labs and demos on Cloud Native Apps (CNA).

Thank you – Top vBlog 2016 – #3 (Mon, 04 Jul 2016)

A Cháirde,

I would like to say a quick thank you for once again voting for my blog in the annual vBlog ballot. It is very humbling that so many of you voted for my blog. Once again I came in at position #3, surrounded by such luminaries as Duncan Epping, William Lam, Frank Denneman and Chris Wahl. And to top it off, I also came in as #1 in the Best Storage Blog category. To say I’m thrilled is an understatement – so thank you.

A special word of thanks also to Eric Siebert of vsphere-land.com for organizing all of this once more. No mean feat. Thank you Eric.

Deploy Docker Swarm using docker-machine with Consul on Photon Controller (Wed, 29 Jun 2016)

In this post I will show you the steps involved in creating a Docker Swarm configuration using docker-machine with the Photon Controller driver plugin. In previous posts, I showed how you can set up Photon OS to deploy Photon Controller, and I also showed you how to build docker-machine for Photon Controller. Note that there are a lot of ways to deploy Swarm. Since I was given a demonstration of doing this using “Consul” for cluster membership and discovery, that is the mechanism that I am going to use here. A couple of weeks back, we looked at deploying Docker Swarm using the “cluster” mechanism also available in Photon Controller. That mechanism used “etcd” for discovery, configuration, and so on. In this example, we are going to deploy Docker Swarm from the ground up, step-by-step, using docker-machine with the photon controller driver, but this time we are going to use “Consul”, which does something very similar to “etcd”.

*** Please note that at the time of writing, Photon Controller is still not GA ***

The steps to deploy Docker Swarm with docker machine on Photon Controller can be outlined as follows:

  1. Deploy Photon Controller (link above)
  2. Build the docker-machine driver for Photon Controller (link above)
  3. Setup the necessary PHOTON environment variables in the environment where you will be deploying Swarm
  4. Deploy Consul machine and Consul tool
  5. Deploy a Docker Swarm master
  6. Deploy one or more Docker Swarm slaves (we provision two)
  7. Deploy your containers

Now because we wish to use the Photon Controller for the underlying framework, we need to ensure that we are using the photon driver for the docker-machines (step 2 above), and that we have the environment variables for PHOTON also in place (step 3 above). I am running this deployment from an Ubuntu 16.04 VM. Here is an example of the environment variables taken from my setup:

PHOTON_DISK_FLAVOR=DOCKERDISKFLAVOR
PHOTON_ISO_PATH=/home/cormac/docker-machine/cloud-init.iso
PHOTON_SSH_USER_PASSWORD=tcuser
PHOTON_VM_FLAVOR=DOCKERFLAVOR
PHOTON_SSH_KEYPATH=/home/cormac/.ssh/id_rsa
PHOTON_PROJECT=0e0de526-06ad-4b60-9d15-a021d68566fe
PHOTON_ENDPOINT=http://10.27.44.34
PHOTON_IMAGE=051ba0d7-2560-4533-b90c-77caa4cd6fb0
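These are plain shell environment variables, so a minimal sketch of putting them in place is simply to export them (or source them from a file) in the shell from which docker-machine will be run, using the values above:

# Sketch only – the values are the ones listed above
export PHOTON_ENDPOINT=http://10.27.44.34
export PHOTON_PROJECT=0e0de526-06ad-4b60-9d15-a021d68566fe
export PHOTON_IMAGE=051ba0d7-2560-4533-b90c-77caa4cd6fb0
export PHOTON_VM_FLAVOR=DOCKERFLAVOR
export PHOTON_DISK_FLAVOR=DOCKERDISKFLAVOR
export PHOTON_ISO_PATH=/home/cormac/docker-machine/cloud-init.iso
export PHOTON_SSH_KEYPATH=/home/cormac/.ssh/id_rsa
export PHOTON_SSH_USER_PASSWORD=tcuser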

Once those are in place, the docker machines can now be deployed. Now you could do this manually, one docker-machine at a time. However my good pal Massimo provided me with the script that he created when this demo was run at DockerCon ’16 recently. Here is the script. Note that the driver option to docker-machine is “photon”.

#!/bin/bash

DRIVER="photon"
NUMBEROFNODES=3
echo
echo "*** Step 1 - deploy the Consul machine"
echo
docker-machine create -d ${DRIVER} consul-machine

echo
echo "*** Step 2 - run the Consul tool on the Consul machine"
echo
docker $(docker-machine config consul-machine) run -d -p "8500:8500" -h "consul" \
progrium/consul -server -bootstrap

echo
echo "*** Step 3 - Create the Docker Swarm master node"
echo
docker-machine create -d ${DRIVER} --swarm --swarm-master \
  --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \
  --engine-opt="cluster-advertise=eth0:2376"\
  swarm-node-1-master

echo
echo "*** Step 4 - Deploy 2  Docker Swarm slave nodes"
echo
i=2

while [[ ${i} -le ${NUMBEROFNODES} ]]
do
    docker-machine create -d ${DRIVER} --swarm \
      --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \
      --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \
      --engine-opt="cluster-advertise=eth0:2376"\
      swarm-node-${i}
    ((i=i+1))
done

echo
echo "*** Step 5 - Display swarm info"
echo
docker-machine env --swarm swarm-node-1-master

And here is an example output from running the script. This is the start of the script where we deploy “Consul”. Here you can see the VM being created with the initial cloud-init ISO image, the VM network details being discovered and then the OS image being attached to the VM (in this case it is Debian). You then see the certs being moved around locally and copied remotely to give us SSH access to the machines. Finally you see that docker is up and running. In the second step, you can see that “Consul” is launched as a container on that docker-machine.

cormac@cs-dhcp32-29:~/docker-machine-scripts$ ./deploy-swarm.sh

*** Step 1 - deploy the Consul machine

Running pre-create checks...
Creating machine...
(consul-machine) VM was created with Id:  7086eecb-a23f-48e0-87a8-13be5f5222f1
(consul-machine) ISO is attached to VM.
(consul-machine) VM is started.
(consul-machine) VM IP:  10.27.33.112
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with debian...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this \
virtual machine, run: docker-machine env consul-machine

*** Step 2 - run the Consul tool on the Consul machine

Unable to find image 'progrium/consul:latest' locally
latest: Pulling from progrium/consul
c862d82a67a2: Pull complete
0e7f3c08384e: Pull complete
0e221e32327a: Pull complete
09a952464e47: Pull complete
60a1b927414d: Pull complete
4c9f46b5ccce: Pull complete
417d86672aa4: Pull complete
b0d47ad24447: Pull complete
fd5300bd53f0: Pull complete
a3ed95caeb02: Pull complete
d023b445076e: Pull complete
ba8851f89e33: Pull complete
5d1cefca2a28: Pull complete
Digest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274
Status: Downloaded newer image for progrium/consul:latest
2ade0f6a921dc208e2cb4fc216278679d3282ca96f4a1508ffdbe95da8760439

Now we come to the section that is specific to Docker Swarm. Many of the steps are similar to what you will see above, but once the OS image is in place, we see the Swarm cluster getting initialized. First we have the master:

*** Step 3 - Create the Docker Swarm master node

Running pre-create checks...
Creating machine...
(swarm-node-1-master) VM was created with Id:  27e28089-6e39-4450-ba37-cde388f427c2
(swarm-node-1-master) ISO is attached to VM.
(swarm-node-1-master) VM is started.
(swarm-node-1-master) VM IP:  10.27.32.103
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with debian...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this \
virtual machine, run: docker-machine env swarm-node-1-master

Then we have the two Swarm slaves being deployed:


*** Step 4 - Deploy 2  Docker Swarm slave nodes

Running pre-create checks...
Creating machine...
(swarm-node-2) VM was created with Id:  e44cc8a4-ca90-4644-9abc-a84311ec603b
(swarm-node-2) ISO is attached to VM.
(swarm-node-2) VM is started.
(swarm-node-2) VM IP:  10.27.33.114
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with debian...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this \
virtual machine, run: docker-machine env swarm-node-2
.
.

If you wish to deploy a slave manually, you would simply run the command below. This deploys one of the slave nodes by hand. You can use this to add additional slaves to the cluster later on.

cormac@cs-dhcp32-29:~/docker-machine-scripts$  docker-machine create -d photon \
--swarm --swarm-discovery="consul://$(docker-machine ip consul-machine):8500"  \
--engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \
--engine-opt="cluster-advertise=eth0:2376" swarm-node-3
Running pre-create checks...
Creating machine...
(swarm-node-3) VM was created with Id:  2744e118-a16a-43ba-857a-472d87502b85
(swarm-node-3) ISO is attached to VM.
(swarm-node-3) VM is started.
(swarm-node-3) VM IP:  10.27.33.118
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with debian...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this \
virtual machine, run: docker-machine env swarm-node-3
cormac@cs-dhcp32-29:~/docker-machine-scripts$

Now both slaves and the master have been deployed. The final step simply displays information about the Swarm environment.

*** Step 5 - Display swarm info

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://10.27.32.103:3376"
export DOCKER_CERT_PATH="/home/cormac/.docker/machine/machines/swarm-node-1-master"
export DOCKER_MACHINE_NAME="swarm-node-1-master"
# Run this command to configure your shell:
# eval $(docker-machine env --swarm swarm-node-1-master)

To show all of the docker machines, run docker-machine ls:

cormac@cs-dhcp32-29:/etc$ docker-machine ls
NAME                  ACTIVE      DRIVER   STATE     URL                       \
SWARM                          DOCKER    ERRORS
consul-machine        -           photon   Running   tcp://10.27.33.112:2376   \
                               v1.11.2
swarm-node-1-master   * (swarm)   photon   Running   tcp://10.27.32.103:2376   \
swarm-node-1-master (master)   v1.11.2
swarm-node-2          -           photon   Running   tcp://10.27.33.114:2376   \
swarm-node-1-master            v1.11.2
swarm-node-3          -           photon   Running   tcp://10.27.33.118:2376   \
swarm-node-1-master            v1.11.2
cormac@cs-dhcp32-29:/etc$

This displays the machine running the “Consul” container, as well as the master node and two slave nodes in my Swarm cluster. Now we can examine the cluster setup in more detail with docker info, after we run the eval command highlighted in the output above to configure our shell:

cormac@cs-dhcp32-29:~/docker-machine-scripts$ eval $(docker-machine env \
--swarm swarm-node-1-master)
cormac@cs-dhcp32-29:~/docker-machine-scripts$ docker info
Containers: 4
 Running: 4
 Paused: 0
 Stopped: 0
Images: 3
Server Version: swarm/1.2.3
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 3
 swarm-node-1-master: 10.27.32.103:2376
  └ ID: O5ZJ:RFDJ:RXUY:CQV6:2TDL:3ACI:DWCP:5X7A:MKCP:HUAP:4TUD:FE4P
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.061 GiB
  └ Labels: executiondriver=, kernelversion=3.16.0-4-amd64, \
operatingsystem=Debian GNU/Linux 8 (jessie), provider=photon, storagedriver=aufs
  └ UpdatedAt: 2016-06-27T15:39:51Z
  └ ServerVersion: 1.11.2
 swarm-node-2: 10.27.33.114:2376
  └ ID: MGRK:45KO:LATQ:DLCZ:ITFX:PSQC:6P4V:ZQYS:NZ35:SLSK:CDYH:5ZME
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.061 GiB
  └ Labels: executiondriver=, kernelversion=3.16.0-4-amd64, \
operatingsystem=Debian GNU/Linux 8 (jessie), provider=photon, storagedriver=aufs
  └ UpdatedAt: 2016-06-27T15:39:42Z
  └ ServerVersion: 1.11.2
 swarm-node-3: 10.27.33.118:2376
  └ ID: NL4P:YTPC:W464:43TA:PECO:D3M3:6EJG:DQOV:BPLW:CSBA:YUPK:JHSI
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.061 GiB
  └ Labels: executiondriver=, kernelversion=3.16.0-4-amd64, \
operatingsystem=Debian GNU/Linux 8 (jessie), provider=photon, storagedriver=aufs
  └ UpdatedAt: 2016-06-27T15:40:06Z
  └ ServerVersion: 1.11.2
Plugins:
 Volume:
 Network:
Kernel Version: 3.16.0-4-amd64
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 6.184 GiB
Name: 87a4cfa14275

And we can also query the membership in “Consul”. The following command will show the Swarm master and slave nodes:

cormac@cs-dhcp32-29:~/docker-machine-scripts$ docker run swarm list \
consul://$(docker-machine ip consul-machine):8500
time="2016-06-27T15:43:22Z" level=info msg="Initializing discovery without TLS"
10.27.32.103:2376
10.27.33.114:2376
10.27.33.118:2376
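
The same membership information can also be read straight from Consul’s HTTP KV API. A quick sketch, assuming the engines register under the default docker/nodes key prefix (which is what the UI navigation below suggests; adjust the prefix if your setup differs):

# List the keys stored under docker/nodes (prefix is an assumption)
curl -s "http://$(docker-machine ip consul-machine):8500/v1/kv/docker/nodes?keys"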

Consul also provides a basic UI. If you point a browser at the docker-machine host running “Consul”, port 8500, this will bring it up. If you navigate to the Key/Value view, click on Docker, then Nodes, the list of members is once again displayed:

[Consul UI screenshot: Key/Value view showing the registered nodes]

Now you can start to deploy containers on the Swarm cluster, and you should once again see them being placed in a round-robin fashion on the slave machines.

To look at the running containers on each of the nodes in the swarm cluster, you must first select the node you wish to examine:

root@cs-dhcp32-29:~# eval $(docker-machine env swarm-node-1-master)
root@cs-dhcp32-29:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED        \
     STATUS              PORTS                              NAMES
6920cf9687c1        swarm:latest        "/swarm join --advert"   2 days ago     \
     Up 2 days           2375/tcp                           swarm-agent
8b2148aeeab8        swarm:latest        "/swarm manage --tlsv"   2 days ago     \
     Up 2 days           2375/tcp, 0.0.0.0:3376->3376/tcp   swarm-agent-master
root@cs-dhcp32-29:~# eval $(docker-machine env swarm-node-2)
root@cs-dhcp32-29:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED        \
     STATUS              PORTS               NAMES
90af8db22134        swarm:latest        "/swarm join --advert"   2 days ago     \
     Up 2 days           2375/tcp            swarm-agent
root@cs-dhcp32-29:~# eval $(docker-machine env swarm-node-3)
root@cs-dhcp32-29:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED        \
     STATUS              PORTS               NAMES
9ee781ea717d        swarm:latest        "/swarm join --advert"   2 days ago     \
     Up 2 days           2375/tcp            swarm-agent

To look at all the containers together, point DOCKER_HOST at the Swarm master on port 3376:

root@cs-dhcp32-29:~# DOCKER_HOST=$(docker-machine ip swarm-node-1-master):3376
root@cs-dhcp32-29:~# export DOCKER_HOST

root@cs-dhcp32-29:~# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED         \
    STATUS              PORTS                                   NAMES
9ee781ea717d        swarm:latest        "/swarm join --advert"   2 days ago      \
    Up 2 days           2375/tcp                                swarm-node-3/swarm-agent
90af8db22134        swarm:latest        "/swarm join --advert"   2 days ago      \
    Up 2 days           2375/tcp                                swarm-node-2/swarm-agent
6920cf9687c1        swarm:latest        "/swarm join --advert"   2 days ago      \
    Up 2 days           2375/tcp                                swarm-node-1-master/swarm-agent
8b2148aeeab8        swarm:latest        "/swarm manage --tlsv"   2 days ago      \
    Up 2 days           2375/tcp, 10.27.33.169:3376->3376/tcp   swarm-node-1-master/swarm-agent-master
root@cs-dhcp32-29:~#

Next, run some containers. I have used the simple “hello-world” image:

root@cs-dhcp32-29:~# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Now examine the containers that have run with “docker ps -a”:

root@cs-dhcp32-29:~# docker ps -a
... NAMES
... swarm-node-3/trusting_allen
... swarm-node-2/evil_mahavira
... swarm-node-3/swarm-agent
... swarm-node-2/swarm-agent
... swarm-node-1-master/swarm-agent
... swarm-node-1-master/swarm-agent-master
root@cs-dhcp32-29:~#

I trimmed the output just to show the NAMES column. Here we can see that the two hello-world containers (the first two in the output) have been placed on different Swarm slaves. The containers are being balanced across nodes in a round-robin fashion.
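
If you want to see the placement behaviour a little more clearly, you could start a handful of detached containers against the Swarm manager and then list just the names; the swarm prefixes each container name with the node it landed on. A rough sketch (the image and container names here are arbitrary):

# Launch a few detached containers via the Swarm manager (DOCKER_HOST on port 3376)
for i in 1 2 3 4; do docker run -d --name web$i nginx; done
# The NAMES column shows which slave each container landed on
docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"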

My understanding is that a number of improvements to Docker Swarm were announced at DockerCon ’16, including a better load-balancing mechanism. However, for the purposes of this demo, placement is still round-robin.

So once again I hope this shows the flexibility of Photon Controller. Yes, you can quickly deploy Docker Swarm using the “canned” cluster format I described previously. But if you want more granular control, or you wish to use different versions or different tooling (e.g. “Consul” instead of “etcd”), you now have the flexibility to deploy Docker Swarm using docker-machine. Have fun!

The post Deploy Docker Swarm using docker-machine with Consul on Photon Controller appeared first on CormacHogan.com.

]]>
http://cormachogan.com/2016/06/29/deploy-docker-swarm-consul-photon-controller/feed/ 0
Compare and Contrast: Photon Controller vs VIC (vSphere Integrated Containers) http://cormachogan.com/2016/06/28/compare-contrast-photon-controller-vs-vic-vsphere-integrated-containers/ http://cormachogan.com/2016/06/28/compare-contrast-photon-controller-vs-vic-vsphere-integrated-containers/#comments Tue, 28 Jun 2016 13:00:29 +0000 http://cormachogan.com/?p=6902 As many regular reader will be aware, I’ve been spending a lot of time recently on VMware’s Cloud Native App solutions. This is due to an internal program available to VMware employees called a Take-3. A Take-3 is where employees Continue reading

The post Compare and Contrast: Photon Controller vs VIC (vSphere Integrated Containers) appeared first on CormacHogan.com.

]]>
As many regular readers will be aware, I’ve been spending a lot of time recently on VMware’s Cloud Native App solutions. This is due to an internal program available to VMware employees called a Take-3. A Take-3 is where employees can take 3 months out of their current role and try a new challenge in another part of the company. Once we launched VSAN 6.2 earlier this year, I thought this would be an opportune time to try something different. Thanks to the support from the management teams in both my Storage and Availability BU (SABU) and the Cloud Native Apps BU (CNABU), I started my Take-3 at the beginning of May. This is when my CNA articles on VIC (vSphere Integrated Containers) and Photon Controller first started to appear. Only recently I was asked an interesting question – when would I use VIC and when would I use Photon Controller? That is a good question, as both products enable customers to use containers on VMware products and solutions. So let me see if I can provide some guidance, as I asked the same question of some of the guiding lights in the CNABU.

When to use VIC?

Let’s talk about VIC first, and why customers might like to deploy container workloads on VIC rather than something like “container in a VM”. Just to recap, VIC allows customers to run “container as a VM” in the vSphere infrastructure, rather than “container in a VM”. It can be deployed directly to a standalone ESXi host, or it can be deployed to vCenter Server. This has some advantages over the “container in a VM” approach.

Reason 1 – Efficiency

Consider an example where you have a VM which runs a docker daemon and launches lots of containers. Customers will then connect to these containers via a docker client. Assume that, over a period of time, this VM uses up a significant amount (if not all) of its memory for containers, and eventually these containers are shut down. The memory consumed by the VM on behalf of the containers does not go back into a shared pool of memory (on the hypervisor where the VM runs) for other uses. With VIC, since we are deploying containers as VMs and using ESXi/hypervisor memory resource management, we do not have this issue. To think of this another way: containers are potentially short-lived, whereas the “container host” is long-lived, and as such can end up making very inefficient use of system resources from the perspective of the global pool.

Now there is a big caveat to this, and it is the question of container packing and life-cycle management. If the container host VMs are well packed with containers, and you also have control over the life-cycle of the container host, then it can still be efficient. If, however, there is no way to predict container packing on the container host, over-provisioning is the result, and you have no control over the life-cycle of the container host, then you typically don’t get very good resource efficiency.

Reason 2 – Multi-tenancy

There is no multi-tenancy in Docker. Therefore if 50 developers all requested a “container” development environment, a vSphere admin would have to deploy 50 virtual machines, one per developer. With VIC, we have the concept of a VCH (Virtual Container Host) which controls access to a pool of vSphere resources. A VCH is designed to be single-tenant, just like a Docker endpoint. Both present you with a per-tenant container namespace. However, with VIC, one can create very many VCHs, each with their own pool of resources. These VCHs (resource pools), whether built on a single ESXi host or on vCenter Server, can be assigned to individual developers.

One could now consider that the vSphere admin is providing CaaS – Containers as a Service.
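
To make that concrete, each developer simply points their Docker client at the endpoint of the VCH they have been given, rather than sharing a single container host. This is just a sketch; the VCH addresses below are placeholders, and the port and TLS settings depend on how each VCH was deployed:

# Each developer talks to their own VCH endpoint (addresses/ports are placeholders)
docker -H tcp://vch-dev01.example.com:2375 info
docker -H tcp://vch-dev02.example.com:2375 run -d nginx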

The 50 developers example is as much about efficiency as it is about tenancy – the fact that you can only have one tenant per container host VM will force you down a path of creating a large silo composed of 50 container host VMs. In the case where we’re comparing ESXi with Linux on the same piece of hardware to run container workloads, ESXi has a big advantage in that you can install as many VCHs as you like.

Reason 3 – Reducing friction between vSphere/Infra Admin and developer

One of the main goals of VIC was basically not to have the developer worry about networking and security infrastructure with containers. This particular reason is more about how VIC informs and clarifies the boundaries between the vSphere admin and the developer. To put it simply, a container host VM is like a mini-hypervisor. Give that to a developer and they’re then on the hook for patching, network virtualization, storage virtualization, packing, etc. within the container host. The container host is then also a “black box” to the infra folks, which can lead to mistakes being made, e.g. “only certain workloads are allowed on this secure network”. The secure network is configured at the VM level. If the VM is a container host, it’s hard to control or audit the containers that are coming up and down in that VM and which have access to that secure network.

VIC removes any infra concerns from the consumer of a VCH and allows for much more fine-grained control over access to resources. With VIC, each container gets its very own vNIC. A vSphere admin can also monitor resources that are being consumed on a per-container basis.

There is one other major differentiator here with regard to the separation of administrator and developer roles, which relates to compliance and auditing tools, and the whole list of processes and procedures they have to follow as they run their data center. Without VIC, developers end up handing over large VMs that are essentially black boxes of “stuff happening” to the infra team. This may include the likes of overlay networks between those “black boxes”. It’s likely that most of the existing tools that the infra team use for compliance, auditing, etc. will not work.

With VIC there is a cleaner line of demarcation. Since all the containers run as VMs, and the vSphere admin already has tools set up to take care of operationalizing VMs, they inherit this capability for containers.

Reason 4 – Clustering

Up until very recently, Docker Swarm has been very primitive when compared to vSphere HA and DRS clustering techniques as the Docker Swarm placement algorithm was simply using round-robin. I’ll qualify this by saying that Docker just announced a new Swarm mechanism that uses Raft consensus rather than round-robin at DockerCon ’16. However, there is still no consideration given to resource utilization when doing container placement. VCH, through DRS, has intelligent clustering built-in by its very nature. There are also significant considerations in this area when it comes to rolling upgrades/maintenance mode, etc.

Reason 5 – May not be limited to Linux

Since VIC virtualizes at the hardware layer, any x86-compatible operating system is, in theory, eligible for the VIC container treatment, meaning that it’s not limited to Linux. This has yet to be confirmed, however, and we will know more closer to GA.

Reason 6 – Manage both VM-based apps and CNA apps in the same infra

This is probably the reason that resonates with folks who are already managing vSphere environments. What do you do when a developer asks you to manage this new, container-based app? Do you stand up a new silo just to do this? With VIC, you do not need to. Now you can manage both VMs and containers via the same “single pane of glass”.

When to use Photon Controller?

Let’s now talk about when you might use Photon Controller. Photon Controller allows you to pool a bunch of ESXi hosts and use them for the deployment of VMs with the sole purpose of running containers.

Reason 1 – No vCenter Server

This is probably the primary reason. If your proposed “container” deployment will not include the management of VMs, but is only focused on managing containers, then you do not need a vCenter Server. Photon Controller does not need a vCenter Server, only ESXi hosts. And when we position a VMware container solution on “greenfield” sites, we shouldn’t have to introduce an additional management framework, such as vCenter, on top of ESXi. The Photon Controller UI will provide the necessary views into this container-only environment, albeit one where the containers run on virtual machines.

Reason 2 – ESXi

ESXi is a world-renowned, reliable, best-in-class hypervisor, with a proven track record. If you want to deploy containers in production, and wish to run them in virtual machines, isn’t ESXi the best choice for such a hypervisor? We hear from many developers that they already use the “free” version of ESXi for developing container applications, as it allows them to run various container machines/VMs of differing flavours. It also allows them to run different frameworks (Swarm, Kubernetes, Mesos). It would seem to make sense to have a way to manage and consume our flagship hypervisor product for containers at scale.

Reason 3 – Scale

This brings us nicely to our next reason. Photon Controller is not limited by vSphere constructs, such as the cluster (which is currently limited to 64 ESXi hosts). There are no such artificial limits with Photon Controller, and you can have as many ESXi hosts as you like providing resources for your container workloads. We are talking about 100s to 1000s of ESXi hosts here.

Reason 4 – Multi-tenancy Resource Management

For those of you familiar with vCloud Director, Photon Controller has some similar constructs for handling multi-tenancy. We have the concept of tenants, and within tenants there are the concepts of resource tickets and projects. This facilitates multi-tenancy for containers, and allows resources to be allocated on a per-tenant basis, and then on a per-project basis for each tenant. There is also the concept of flavors, for both compute and disk, where resource allocation and sizing of containers can be managed.
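
To give a feel for how those constructs hang together, the workflow with the photon CLI looks roughly like the sketch below. This is illustrative only: the tenant, ticket and project names are made up, and the exact option syntax and limit keys may differ between Photon Controller builds, so treat it as an assumption rather than a reference.

# Illustrative only - names are made up and option syntax may differ between builds
photon tenant create dev-tenant
photon resource-ticket create --tenant dev-tenant --name gold-ticket \
--limits "vm.memory 100 GB, vm 100 COUNT"
photon project create --tenant dev-tenant --resource-ticket gold-ticket \
--name demo-project --limits "vm.memory 50 GB, vm 50 COUNT"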

Reason 5 – Quick start with cluster/orchestration frameworks

As many of my blog posts on Photon Controller have shown, you can very quickly stand up frameworks such as Kubernetes, Docker Swarm and Mesos using the Photon Controller construct of a “Cluster”. This will allow you to get started very quickly on container-based projects. On the flip side, if you are more interested in deploying these frameworks using traditional methods such as “docker-machine” or “kube-up”, these are also supported. Either way, deploying these frameworks is very straightforward and quick.
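
For comparison with the docker-machine approach walked through earlier, the “canned” route boils down to a single CLI call of roughly this shape. This is a sketch only: the real command also needs networking and etcd/master details, and the option names vary between Photon Controller builds.

# Sketch of the "canned" cluster approach; additional network/etcd options omitted
photon cluster create -n swarm-demo -k SWARM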

Conclusion

I hope this clarifies the difference between the VIC and Photon Controller projects that VMware is undertaking. There are of course other projects ongoing, such as Photon OS. It seems that understanding the difference between VIC and Photon Controller is not quite intuitive, so hopefully this post helps to clarify it in a few ways. One thing that I do want to highlight is that Photon Controller is not a replacement for vCenter Server. It does not have all of the features or services that we associate with vCenter Server, e.g. SRM for DR, VDP for backup, etc.

Many thanks to Ben Corrie and Mike Hall of the VMware CNA BU for taking some time out and providing me with some of their thoughts and ideas on the main differentiators between the two products.

The post Compare and Contrast: Photon Controller vs VIC (vSphere Integrated Containers) appeared first on CormacHogan.com.

]]>
http://cormachogan.com/2016/06/28/compare-contrast-photon-controller-vs-vic-vsphere-integrated-containers/feed/ 6
See you at VMworld 2016 http://cormachogan.com/2016/06/22/see-vmworld-2016/ http://cormachogan.com/2016/06/22/see-vmworld-2016/#comments Wed, 22 Jun 2016 13:00:01 +0000 http://cormachogan.com/?p=6871 I’m thrilled to have had a session accepted at this year’s VMworld. I’m also going to be a co-speaker on another session. As you might have guessed, both presentations are on Virtual SAN (VSAN), and I am co-presenting both sessions Continue reading

The post See you at VMworld 2016 appeared first on CormacHogan.com.

]]>
I’m thrilled to have had a session accepted at this year’s VMworld. I’m also going to be a co-speaker on another session. As you might have guessed, both presentations are on Virtual SAN (VSAN), and I am co-presenting both sessions with my buddy Paudie O’Riordan.

In the first session, we will be talking about how to conduct a successful proof of concept (PoC) on VSAN, which will cover how to prepare, how to test, and what gotchas you need to be aware of when going through a PoC with VSAN. This is session id STO7535 and it is currently scheduled for Wednesday morning (31st August) at 08:30am.

In the other session, which covers day #2 operations, we will cover items like upgrades, troubleshooting, remediation, and monitoring of VSAN, and all those other things that you need to care about when you have VSAN in production. This is session id STO7534 and is scheduled for Tuesday morning (30th August) at 11:00am.

If you have any thoughts on what you would like to see covered during the session, please leave a comment. We’re still putting together the content, and we are wide open to suggestions.

Hope to see you at one of the sessions.

The post See you at VMworld 2016 appeared first on CormacHogan.com.

]]>
http://cormachogan.com/2016/06/22/see-vmworld-2016/feed/ 2