
VMware Fusion v12 – Kubernetes / Kind integration

I recently took a look at the container integration features in VMware Fusion v11.5.6 through the vctl command line tool. I was intrigued to read about a future feature coming in version 12, which included some Kind integration. For those of you unfamiliar with Kind (Kubernetes in Docker), it is a way of deploying Kubernetes clusters in containers. It might sound a bit strange, but it is actually very powerful, and it is used by a lot of developers for many different use cases. This post is going to look at vctl with this new Kind integration in VMware Fusion version 12. Let’s see how to get it up and running and how to use it.

Let’s start by getting a list of the vctl commands. The new additions in this version include inspect, kind and volume, and it is kind that we shall be looking at in more detail shortly.

chogan@chogan-a01 ~ % vctl version
vctl version: 1.1.0
containerd github.com/containerd/containerd v1.3.2-vmw

chogan@chogan-a01 ~ % vctl
vctl - A CLI tool for the container engine powered by VMware Fusion
vctl Highlights:
• Build and run OCI containers.
• Push and pull container images between remote registries & local storage.
• Use a lightweight virtual machine (CRX VM) based on VMware Photon OS to host a container. Use 'vctl system config -h' to learn more.
• Easy shell access into virtual machine that hosts container. See 'vctl execvm'.

USAGE:
  vctl COMMAND [OPTIONS]

COMMANDS:
  build       Build a container image from a Dockerfile.
  create      Create a new container from a container image.
  describe    Show details of a container.
  exec        Execute a command within a running container.
  execvm      Execute a command within a running virtual machine that hosts container.
  help        Help about any command.
  images      List container images.
  inspect     Return low-level information on objects
  kind        Get system environment ready for vctl-based KIND.
  login       Log in to a Docker registry.
  logout      Log out from a remote registry.
  ps          List containers.
  pull        Pull a container image from a registry.
  push        Push a container image to a registry.
  rm          Remove one or more containers.
  rmi         Remove one or more container images.
  run         Run a new container from a container image.
  start       Start an existing container.
  stop        Stop a container.
  system      Manage the container engine.
  tag         Tag container images.
  version     Print the version of vctl.
  volume      Manage volumes.

Run 'vctl COMMAND --help' for more information on a command.

OPTIONS:
  -h, --help   Help for vctl

chogan@chogan-a01 ~ %

First, let’s start the Nautilus Container Engine. It uses very lightweight virtual machines (CRX) based on VMware Photon OS to host the containers.

chogan@chogan-a01 ~ % vctl system start
Preparing storage...
Container storage has been prepared successfully under /Users/chogan/.vctl/storage
Launching container runtime...
Container runtime has been started.
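
A quick aside before going any further: the kind “nodes” we are about to create are just vctl containers, each backed by its own CRX virtual machine, so they need a reasonable amount of CPU and memory. The CRX VM sizing is managed through vctl system config, as hinted at in the vctl help output earlier. Something along these lines should work, but do check vctl system config -h on your own install, as I am going from memory on the exact flag names:

# Show the options available for sizing the CRX VMs
vctl system config -h

# For example, give each CRX VM 2 vCPUs and 2GB of memory (verify the flag names via -h)
vctl system config --vm-cpu 2 --vm-mem 2048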

Now we can initialize vctl-based KIND. Note that it pulls down the kubectl and kind binaries, as well as a virtual machine disk file called crx.vmdk.

chogan@chogan-a01 ~ % vctl kind
Downloading 3 files...
Downloading [kubectl 47.38% kind-darwin-amd64 100.00% crx.vmdk 24.98%]
Finished kind-darwin-amd64 100.00%
Downloading [kubectl 99.11% crx.vmdk 62.76%]
Finished kubectl 100.00%
Downloading [crx.vmdk 98.79%]
Finished crx.vmdk 100.00%
3 files successfully downloaded.
vctl-based KIND is ready now. KIND will run local Kubernetes clusters by using vctl containers as "nodes"
* All Docker commands has been aliased to vctl in the current terminal. \
Docker commands performed in current window would be executed through vctl. \
If you need to use regular Docker commands, please use a separate terminal window.
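
Note the message about Docker commands being aliased to vctl in the current terminal. If you are curious, you can check how docker now resolves in this shell; the exact output will depend on your shell and on how vctl has wired it up:

# Check what 'docker' points at in this terminal
type docker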

Now let’s see what we can do with kind. What we really want to do is deploy a Kubernetes cluster using containers:

chogan@chogan-a01 ~ % kind
kind creates and manages local Kubernetes clusters using Docker container 'nodes'

Usage:
  kind [command]

Available Commands:
  build       Build one of [node-image]
  completion  Output shell completion code for the specified shell (bash, zsh or fish)
  create      Creates one of [cluster]
  delete      Deletes one of [cluster]
  export      Exports one of [kubeconfig, logs]
  get         Gets one of [clusters, nodes, kubeconfig]
  help        Help about any command
  load        Loads images into nodes
  version     Prints the kind CLI version

Flags:
  -h, --help              help for kind
      --loglevel string   DEPRECATED: see -v instead
  -q, --quiet             silence all stderr output
  -v, --verbosity int32   info log verbosity
      --version           version for kind

Use "kind [command] --help" for more information about a command.

Let’s create a Kubernetes cluster:

chogan@chogan-a01 ~ % kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.18.2) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊

That looks successful. Let’s run a few queries and verify that the cluster has indeed been created.

chogan@chogan-a01 ~ % kind get clusters
kind

chogan@chogan-a01 ~ % kind get nodes
kind-control-plane

chogan@chogan-a01 ~ % kind get kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS<snip>
    server: https://127.0.0.1:63181
  name: kind-kind
contexts:
- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1<snip>
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUl<snip>

So far, so good. Using the above kubeconfig, I can switch contexts and do some further queries using kubectl. We can check how many nodes are in the cluster, and which Pods and Services have been created.
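
As an aside, kind create cluster has already set the kubectl context for me (as shown in the output earlier), but if you want the kubeconfig in a standalone file, for example to use it from another terminal window, something like this works (the file path is just an example):

# Save the kubeconfig for the 'kind' cluster to its own file
kind get kubeconfig > ~/.kube/kind-kind.conf

# Point kubectl at it and select the context
export KUBECONFIG=~/.kube/kind-kind.conf
kubectl config use-context kind-kind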

chogan@chogan-a01 ~ % kubectl cluster-info --context kind-kind
Kubernetes master is running at https://127.0.0.1:63181
KubeDNS is running at https://127.0.0.1:63181/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


chogan@chogan-a01 ~ % kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
kind-control-plane   Ready    master   18m   v1.18.2


chogan@chogan-a01 ~ % kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  26m
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   26m


chogan@chogan-a01 ~ % kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-66bff467f8-ltv56                     1/1     Running   0          26m
kube-system          coredns-66bff467f8-s7p9f                     1/1     Running   0          26m
kube-system          etcd-kind-control-plane                      1/1     Running   0          26m
kube-system          kindnet-h8dgm                                1/1     Running   0          26m
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          26m
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          26m
kube-system          kube-proxy-mv5cx                             1/1     Running   0          26m
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          26m
local-path-storage   local-path-provisioner-bd4bb6b75-gjpp4       1/1     Running   0          26m

At this point, you can go ahead and deploy your own manifest files to create objects in this Kubernetes cluster. Hopefully that has given you a taste of how easy it is to get it up and running.
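
For example, a simple Deployment manifest like the one below (the names are made up for illustration) can be applied with kubectl apply -f:

# nginx-demo.yaml - a basic two-replica nginx Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

kubectl apply -f nginx-demo.yaml followed by kubectl port-forward deployment/nginx-demo 8080:80 should then let you browse to the nginx welcome page on http://localhost:8080.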

Let’s now look at the other new vctl commands, inspect and volume. I currently have two containers, one for kind (running) and an older one for nginx (stopped). I can use vctl to inspect a container in detail. Information such as labels, image, runtime and creation time is all available. I’ve truncated the output below.

chogan@chogan-a01 ~ % vctl ps -a
────                 ─────                                                                                  ───────                   ──               ─────            ──────    ─────────────
NAME                 IMAGE                                                                                  COMMAND                   IP               PORTS            STATUS    CREATION TIME
────                 ─────                                                                                  ───────                   ──               ─────            ──────    ─────────────
kind-control-plane   kindest/node@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f   /usr/local/bin/entry...   172.16.255.128   63181:6443/tcp   running   2020-09-07T10:35:03+01:00
mynewnginx           cormachogan/nginx:latest                                                               /docker-entrypoint.s...   n/a              n/a              stopped   2020-09-03T14:37:54+01:00


chogan@chogan-a01 ~ % vctl inspect kind-control-plane
{
    "ID": "kind-control-plane",
    "Labels": {
        "io.containerd.image.config.stop-signal": "SIGRTMIN+3",
        "io.x-k8s.kind.cluster": "kind",
        "io.x-k8s.kind.role": "control-plane"
    },
    "Image": "index.docker.io/kindest/node@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f",
    "Runtime": {
        "Name": "io.containerd.crx.v2",
        "Options": null
    },
    "SnapshotKey": "kind-control-plane",
    "Snapshotter": "dmg",
    "CreatedAt": "2020-09-07T09:35:03.273552Z",
    "UpdatedAt": "2020-09-07T09:35:03.398965Z",
    "Extensions": null,
    "Spec": {
        "ociVersion": "1.0.1",
        "process": {
            "terminal": true,
            "user": {
                "uid": 0,
                "gid": 0
            },
            "args": [
                "/usr/local/bin/entrypoint",
                "/sbin/init"
            ],
            "env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "container=docker",
                "TERM=xterm"
            ],
            "cwd": "/",
            "capabilities": {
                "bounding": [
                    "CAP_CHOWN",
                    "CAP_DAC_OVERRIDE",
                    "CAP_DAC_READ_SEARCH",
                    "CAP_FOWNER",
                    "CAP_FSETID”,
<snip>

The vctl volume command can be used to clean up (prune) container volumes that are no longer attached to a container, i.e. orphaned volumes. I don’t have any orphaned volumes, which is why the command fails below, but hopefully it gives you an idea of how to use it.

chogan@chogan-a01 ~ % vctl volume
Manage volumes.
Manage volumes, currently only support volume prune.

USAGE:
  vctl volume COMMAND [OPTIONS]

COMMANDS:
  prune       Remove all unused local volumes.

Run 'vctl volume COMMAND --help' for more information on a command.

OPTIONS:
  -h, --help   Help for volume


chogan@chogan-a01 ~ % vctl volume prune
WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N]y
ERROR open /Users/chogan/.vctl/storage/volumes: no such file or directory

Another great update from the VMware Fusion team. It’s going to be super useful for anyone wanting a simple Kubernetes cluster on their laptop/desktop.
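
Finally, when you are finished experimenting, tearing everything down again is straightforward:

# Delete the kind cluster (this removes the kind-control-plane container)
kind delete cluster

# Stop the Nautilus Container Engine when you no longer need any containers
vctl system stop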
