Getting started with VIC v1.1
VMware recently released vSphere Integrated Containers v1.1, and I got an opportunity to give it a whirl. While I’ve done quite a bit of work with VIC in the past, a number of things have changed, especially in the command line. What I’ve decided to do in this post is highlight some of the new command line options that are needed to deploy the VCH, the Virtual Container Host. Once the VCH is deployed, you have a docker API endpoint from which to start deploying your “containers as VMs”. Before diving in, however, I do want to clarify one point that comes up quite a bit: VIC v1.1 is not using VM fork/instant clone. There are still some limitations to using instant clone, and the VIC team decided not to pursue this option just yet, as they wished to leverage the full set of vSphere core features. Thanks Massimo for the clarification. Now onto deploying my VCH with VIC v1.1.
First things first – VIC now comes as an OVA. Roll it out like any other OVA. Once deployed, you can point a web browser at the OVA and pull down the vic-machine components directly to deploy the VCH(s).
I have gone with deploying the VCH from a Windows environment using vic-machine. If you want to see the steps involved in getting a Windows environment ready for VIC, check out this post here from Cody over at the humble lab. Here is the help output to get us started.
C:\Users\chogan\Downloads\vic>vic-machine-windows.exe -h
NAME:
   vic-machine-windows.exe - Create and manage Virtual Container Hosts

USAGE:
   vic-machine-windows.exe [global options] command [command options] [arguments...]

VERSION:
   v1.1.0-9852-e974a51

COMMANDS:
     create   Deploy VCH
     delete   Delete VCH and associated resources
     ls       List VCHs
     inspect  Inspect VCH
     upgrade  Upgrade VCH to latest version
     version  Show VIC version information
     debug    Debug VCH
     update   Modify configuration
     help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --help, -h     show help
   --version, -v  print the version

C:\Users\chogan\Downloads\vic>
Let’s see if I can at least validate against my vSphere environment by trying to list any existing VCHs:
C:\Users\chogan\Downloads\vic>vic-machine-windows.exe ls --target vcsa-06.rainpole.com \
  --user administrator@vsphere.local --password xxx
Apr 28 2017 12:38:04.402+01:00 INFO ### Listing VCHs ####
Apr 28 2017 12:38:04.491+01:00 ERROR Failed to verify certificate for target=vcsa-06.rainpole.com (thumbprint=4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00)
Apr 28 2017 12:38:04.494+01:00 ERROR List cannot continue - failed to create validator: x509: certificate signed by unknown authority
Apr 28 2017 12:38:04.495+01:00 ERROR --------------------
Apr 28 2017 12:38:04.496+01:00 ERROR vic-machine-windows.exe ls failed: list failed
Well, that did not work. I need to include the thumbprint of the vCenter server in the command:
C:\Users\chogan\Downloads\vic>vic-machine-windows.exe ls --target vcsa-06.rainpole.com \
  --user administrator@vsphere.local --password xxx \
  --thumbprint 4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00
Apr 28 2017 12:39:37.898+01:00 INFO ### Listing VCHs ####
Apr 28 2017 12:39:38.109+01:00 INFO Validating target

ID    PATH    NAME    VERSION    UPGRADE STATUS
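Incidentally, if you don’t want to copy the thumbprint out of an error message, one way to retrieve it up front is to query the vCenter certificate directly with openssl. This is a sketch from a Linux/macOS shell (on Windows you could do the equivalent in PowerShell), and it assumes openssl is installed; the vCenter FQDN is from my lab, so substitute your own:

```shell
# Fetch the vCenter server certificate and print its SHA-1 thumbprint
# in the colon-separated form that vic-machine expects.
# vcsa-06.rainpole.com is my lab vCenter - substitute your own FQDN.
echo | openssl s_client -connect vcsa-06.rainpole.com:443 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha1 \
  | cut -d '=' -f 2
```

The output can be pasted straight into the --thumbprint argument.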
Now the command is working, but I don’t have any existing VCHs. Let’s create one. There are a lot of options included in this command since we are providing not only VCH details, but also network details for the “containers as VMs” that we will deploy later on:
C:\Users\chogan\Downloads\vic>vic-machine-windows.exe create --target vcsa-06.rainpole.com \
  --user administrator@vsphere.local --password xxxx --name corVCH01 \
  --public-network "VM Network" --bridge-network BridgeDPG --bridge-network-range "192.168.100/16" \
  --dns-server 10.27.51.252 --tls-cname=*.rainpole.com --no-tlsverify --compute-resource Cluster \
  --thumbprint 4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00
Apr 28 2017 12:59:31.479+01:00 INFO ### Installing VCH ####
Apr 28 2017 12:59:31.481+01:00 WARN Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
Apr 28 2017 12:59:31.483+01:00 ERROR Common Name must be provided when generating certificates for client authentication:
Apr 28 2017 12:59:31.485+01:00 INFO   --tls-cname=<FQDN or static IP>   # for the appliance VM
Apr 28 2017 12:59:31.487+01:00 INFO   --tls-cname=<*.yourdomain.com>    # if DNS has entries in that form for DHCP addresses (less secure)
Apr 28 2017 12:59:31.492+01:00 INFO   --no-tlsverify                    # disables client authentication (anyone can connect to the VCH)
Apr 28 2017 12:59:31.493+01:00 INFO   --no-tls                          # disables TLS entirely
Apr 28 2017 12:59:31.494+01:00 INFO
Apr 28 2017 12:59:31.496+01:00 ERROR Create cannot continue: unable to generate certificates
Apr 28 2017 12:59:31.498+01:00 ERROR --------------------
Apr 28 2017 12:59:31.499+01:00 ERROR vic-machine-windows.exe create failed: provide Common Name for server certificate
Unfortunately, it doesn’t like the TLS part of the command. This appears to be a known issue: the TLS options should be specified before some of the other arguments on the command line. Let’s move --no-tlsverify earlier in the command:
C:\Users\chogan\Downloads\vic>vic-machine-windows.exe create --target vcsa-06.rainpole.com \
  --user "administrator@vsphere.local" --password "xxx" --no-tlsverify --name corVCH01 \
  --public-network "VM Network" --bridge-network BridgeDPG --bridge-network-range "192.168.100.0/16" \
  --dns-server 10.27.51.252 --compute-resource Cluster \
  --thumbprint 4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00
Apr 28 2017 13:05:45.623+01:00 INFO ### Installing VCH ####
Apr 28 2017 13:05:45.625+01:00 WARN Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
Apr 28 2017 13:05:45.627+01:00 INFO Generating self-signed certificate/key pair - private key in corVCH01\server-key.pem
Apr 28 2017 13:05:46.162+01:00 WARN Configuring without TLS verify - certificate-based authentication disabled
Apr 28 2017 13:05:46.336+01:00 INFO Validating supplied configuration
Apr 28 2017 13:05:46.432+01:00 INFO Suggesting valid values for --image-store based on "*"
Apr 28 2017 13:05:46.438+01:00 INFO Suggested values for --image-store:
Apr 28 2017 13:05:46.439+01:00 INFO   "vsanDatastore (1)"
Apr 28 2017 13:05:46.441+01:00 INFO   "isilion-nfs-01"
Apr 28 2017 13:05:46.463+01:00 INFO vDS configuration OK on "BridgeDPG"
Apr 28 2017 13:05:46.464+01:00 ERROR Firewall check SKIPPED
Apr 28 2017 13:05:46.466+01:00 ERROR   datastore not set
Apr 28 2017 13:05:46.467+01:00 ERROR License check SKIPPED
Apr 28 2017 13:05:46.468+01:00 ERROR   datastore not set
Apr 28 2017 13:05:46.469+01:00 ERROR DRS check SKIPPED
Apr 28 2017 13:05:46.471+01:00 ERROR   datastore not set
Apr 28 2017 13:05:46.472+01:00 ERROR Compatibility check SKIPPED
Apr 28 2017 13:05:46.473+01:00 ERROR   datastore not set
Apr 28 2017 13:05:46.475+01:00 ERROR --------------------
Apr 28 2017 13:05:46.476+01:00 ERROR datastore empty
Apr 28 2017 13:05:46.477+01:00 ERROR Specified bridge network range is not large enough for the default bridge network size. --bridge-network-range must be /16 or larger network.
Apr 28 2017 13:05:46.479+01:00 ERROR Firewall check SKIPPED
Apr 28 2017 13:05:46.480+01:00 ERROR License check SKIPPED
Apr 28 2017 13:05:46.482+01:00 ERROR DRS check SKIPPED
Apr 28 2017 13:05:46.484+01:00 ERROR Compatibility check SKIPPED
Apr 28 2017 13:05:46.488+01:00 ERROR Create cannot continue: configuration validation failed
Apr 28 2017 13:05:46.490+01:00 ERROR --------------------
Apr 28 2017 13:05:46.491+01:00 ERROR vic-machine-windows.exe create failed: validation of configuration failed
The TLS issue now seems to be addressed, but it appears I omitted a required field, --image-store. This is where the container images will be stored, and it should be set to one of the datastores available in the vSphere environment. The output even suggests some valid options, either the vSAN datastore or an NFS datastore, both of which are available to all hosts in the cluster.
C:\Users\chogan\Downloads\vic>vic-machine-windows.exe create --target vcsa-06.rainpole.com \
  --user "administrator@vsphere.local" --password "xxx" --no-tlsverify --name corVCH01 \
  --image-store isilion-nfs-01 --public-network "VM Network" --bridge-network BridgeDPG \
  --bridge-network-range "192.168.100.0/16" --dns-server 10.27.51.252 --compute-resource Cluster \
  --thumbprint 4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00
Apr 28 2017 13:09:17.732+01:00 INFO ### Installing VCH ####
Apr 28 2017 13:09:17.736+01:00 WARN Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
Apr 28 2017 13:09:17.739+01:00 INFO Loaded server certificate corVCH01\server-cert.pem
Apr 28 2017 13:09:17.741+01:00 WARN Configuring without TLS verify - certificate-based authentication disabled
Apr 28 2017 13:09:17.914+01:00 INFO Validating supplied configuration
Apr 28 2017 13:09:18.027+01:00 INFO vDS configuration OK on "BridgeDPG"
Apr 28 2017 13:09:18.053+01:00 INFO Firewall status: DISABLED on "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:09:18.078+01:00 INFO Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:09:18.101+01:00 INFO Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:09:18.130+01:00 INFO Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:09:18.142+01:00 INFO Firewall configuration OK on hosts:
Apr 28 2017 13:09:18.144+01:00 INFO   "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:09:18.145+01:00 INFO   "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:09:18.147+01:00 INFO   "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:09:18.149+01:00 INFO   "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:09:18.188+01:00 INFO License check OK on hosts:
Apr 28 2017 13:09:18.190+01:00 INFO   "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:09:18.191+01:00 INFO   "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:09:18.192+01:00 INFO   "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:09:18.194+01:00 INFO   "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:09:18.205+01:00 INFO DRS check OK on:
Apr 28 2017 13:09:18.206+01:00 INFO   "/DC/host/Cluster"
Apr 28 2017 13:09:18.234+01:00 INFO
Apr 28 2017 13:09:18.346+01:00 INFO Creating virtual app "corVCH01"
Apr 28 2017 13:09:18.369+01:00 INFO Creating appliance on target
Apr 28 2017 13:09:18.374+01:00 INFO Network role "client" is sharing NIC with "public"
Apr 28 2017 13:09:18.375+01:00 INFO Network role "management" is sharing NIC with "public"
Apr 28 2017 13:09:19.301+01:00 INFO Uploading images for container
Apr 28 2017 13:09:19.307+01:00 INFO   "bootstrap.iso"
Apr 28 2017 13:09:19.309+01:00 INFO   "appliance.iso"
Apr 28 2017 13:09:25.346+01:00 INFO Waiting for IP information
Apr 28 2017 13:09:42.869+01:00 INFO Waiting for major appliance components to launch
Apr 28 2017 13:09:42.918+01:00 INFO Obtained IP address for client interface: "10.27.51.38"
Apr 28 2017 13:09:42.921+01:00 INFO Checking VCH connectivity with vSphere target
Apr 28 2017 13:10:42.946+01:00 WARN Could not run VCH vSphere API target check due to ServerFaultCode: A general system error occurred: vix error codes = (3016, 0). but the VCH may still function normally
Apr 28 2017 13:12:25.346+01:00 ERROR Connection failed with error: i/o timeout
Apr 28 2017 13:12:25.346+01:00 INFO Docker API endpoint check failed: failed to connect to https://10.27.51.38:2376/info: i/o timeout
Apr 28 2017 13:12:25.347+01:00 INFO Collecting e1ea92eb-ac80-4b33-88cc-831b35fd8bab vpxd.log
Apr 28 2017 13:12:25.418+01:00 INFO API may be slow to start - try to connect to API after a few minutes:
Apr 28 2017 13:12:25.428+01:00 INFO Run command: docker -H 10.27.51.38:2376 --tls info
Apr 28 2017 13:12:25.429+01:00 INFO If command succeeds, VCH is started. If command fails, VCH failed to install - see documentation for troubleshooting.
Apr 28 2017 13:12:25.431+01:00 ERROR --------------------
Apr 28 2017 13:12:25.431+01:00 ERROR vic-machine-windows.exe create failed: Creating VCH exceeded time limit of 3m0s. Please increase the timeout using --timeout to accommodate for a busy vSphere target
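When a create fails with an i/o timeout like this, it is worth confirming basic name resolution from the machine running vic-machine before digging any deeper. Here is a minimal sketch, assuming a Linux shell with getent available (the hostnames are from my lab, so substitute your own vCenter and ESXi names):

```shell
# Check that the vSphere entities resolve from the deployment host.
# These hostnames are from my lab - substitute your own vCenter/ESXi names.
for h in vcsa-06.rainpole.com esxi-dell-i.rainpole.com; do
  if getent hosts "$h" > /dev/null; then
    echo "$h resolves OK"
  else
    echo "$h FAILED to resolve"
  fi
done
```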
I traced this to a DNS issue. It seems this can arise when the VCH cannot resolve some of the vSphere entities (vCenter Server, ESXi hosts). Since I was using DHCP for my VCH, I did not need to specify an IP address, subnet mask or DNS server, yet my command included a DNS server entry. So I simply removed the --dns-server reference and ran the command without it (this time I also included the --volume-store option, which specifies where any volumes created by containers are stored):
C:\Users\chogan\Downloads\vic>vic-machine-windows.exe create --name corVCH01 --compute-resource Cluster \
  --target vcsa-06.rainpole.com --user administrator@vsphere.local --password xxx \
  --thumbprint 4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00 --no-tlsverify \
  --image-store isilion-nfs-01 --public-network "VM Network" --bridge-network BridgeDPG \
  --bridge-network-range "192.168.100.0/16" --volume-store "isilion-nfs-01/VIC:corvols"
Apr 28 2017 13:46:40.671+01:00 INFO ### Installing VCH ####
Apr 28 2017 13:46:40.672+01:00 WARN Using administrative user for VCH operation - use --ops-user to improve security (see -x for advanced help)
Apr 28 2017 13:46:40.697+01:00 INFO Loaded server certificate corVCH01\server-cert.pem
Apr 28 2017 13:46:40.699+01:00 WARN Configuring without TLS verify - certificate-based authentication disabled
Apr 28 2017 13:46:40.873+01:00 INFO Validating supplied configuration
Apr 28 2017 13:46:40.991+01:00 INFO vDS configuration OK on "BridgeDPG"
Apr 28 2017 13:46:41.018+01:00 INFO Firewall status: DISABLED on "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:46:41.044+01:00 INFO Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:46:41.071+01:00 INFO Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:46:41.097+01:00 INFO Firewall status: ENABLED on "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:46:41.109+01:00 INFO Firewall configuration OK on hosts:
Apr 28 2017 13:46:41.111+01:00 INFO   "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:46:41.112+01:00 INFO   "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:46:41.113+01:00 INFO   "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:46:41.115+01:00 INFO   "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:46:41.331+01:00 INFO License check OK on hosts:
Apr 28 2017 13:46:41.333+01:00 INFO   "/DC/host/Cluster/esxi-dell-i.rainpole.com"
Apr 28 2017 13:46:41.334+01:00 INFO   "/DC/host/Cluster/esxi-dell-j.rainpole.com"
Apr 28 2017 13:46:41.335+01:00 INFO   "/DC/host/Cluster/esxi-dell-k.rainpole.com"
Apr 28 2017 13:46:41.337+01:00 INFO   "/DC/host/Cluster/esxi-dell-l.rainpole.com"
Apr 28 2017 13:46:41.347+01:00 INFO DRS check OK on:
Apr 28 2017 13:46:41.350+01:00 INFO   "/DC/host/Cluster"
Apr 28 2017 13:46:41.384+01:00 INFO
Apr 28 2017 13:46:41.493+01:00 INFO Creating virtual app "corVCH01"
Apr 28 2017 13:46:41.521+01:00 INFO Creating directory [isilion-nfs-01] VIC
Apr 28 2017 13:46:41.527+01:00 INFO Datastore path is [isilion-nfs-01] VIC
Apr 28 2017 13:46:41.528+01:00 INFO Creating appliance on target
Apr 28 2017 13:46:41.533+01:00 INFO Network role "client" is sharing NIC with "public"
Apr 28 2017 13:46:41.537+01:00 INFO Network role "management" is sharing NIC with "public"
Apr 28 2017 13:46:42.515+01:00 INFO Uploading images for container
Apr 28 2017 13:46:42.517+01:00 INFO   "bootstrap.iso"
Apr 28 2017 13:46:42.518+01:00 INFO   "appliance.iso"
Apr 28 2017 13:46:48.425+01:00 INFO Waiting for IP information
Apr 28 2017 13:47:03.785+01:00 INFO Waiting for major appliance components to launch
Apr 28 2017 13:47:03.860+01:00 INFO Obtained IP address for client interface: "10.27.51.41"
Apr 28 2017 13:47:03.862+01:00 INFO Checking VCH connectivity with vSphere target
Apr 28 2017 13:47:03.935+01:00 INFO vSphere API Test: https://vcsa-06.rainpole.com vSphere API target responds as expected
Apr 28 2017 13:47:08.483+01:00 INFO Initialization of appliance successful
Apr 28 2017 13:47:08.484+01:00 INFO
Apr 28 2017 13:47:08.485+01:00 INFO VCH Admin Portal:
Apr 28 2017 13:47:08.486+01:00 INFO https://10.27.51.41:2378
Apr 28 2017 13:47:08.487+01:00 INFO
Apr 28 2017 13:47:08.489+01:00 INFO Published ports can be reached at:
Apr 28 2017 13:47:08.490+01:00 INFO 10.27.51.41
Apr 28 2017 13:47:08.491+01:00 INFO
Apr 28 2017 13:47:08.492+01:00 INFO Docker environment variables:
Apr 28 2017 13:47:08.493+01:00 INFO DOCKER_HOST=10.27.51.41:2376
Apr 28 2017 13:47:08.499+01:00 INFO
Apr 28 2017 13:47:08.500+01:00 INFO Environment saved in corVCH01/corVCH01.env
Apr 28 2017 13:47:08.502+01:00 INFO
Apr 28 2017 13:47:08.503+01:00 INFO Connect to docker:
Apr 28 2017 13:47:08.504+01:00 INFO docker -H 10.27.51.41:2376 --tls info
Apr 28 2017 13:47:08.506+01:00 INFO Installer completed successfully
Success! I now have my docker endpoint, and I can hand it to my developers for the creation of “containers as VMs”. Let’s see if it works with a quick check/test:
C:\Users\chogan\Downloads\vic>docker -H 10.27.51.41:2376 --tls info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: v1.1.0-9852-e974a51
Storage Driver: vSphere Integrated Containers v1.1.0-9852-e974a51 Backend Engine
VolumeStores: corvols
vSphere Integrated Containers v1.1.0-9852-e974a51 Backend Engine: RUNNING
 VCH CPU limit: 155936 MHz
 VCH memory limit: 423.9 GiB
 VCH CPU usage: 0 MHz
 VCH memory usage: 5.028 GiB
 VMware Product: VMware vCenter Server
 VMware OS: linux-x64
 VMware OS version: 6.5.0
Plugins:
 Volume: vsphere
 Network: bridge
Swarm: inactive
Operating System: linux-x64
OSType: linux-x64
Architecture: x86_64
CPUs: 155936
Total Memory: 423.9GiB
ID: vSphere Integrated Containers
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
Registry: registry-1.docker.io
Experimental: false
Live Restore Enabled: false

C:\Users\chogan\Downloads\vic>
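The info output confirms the endpoint is up. Rather than passing -H on every docker command, you can export the DOCKER_HOST value that the installer printed (and saved in corVCH01/corVCH01.env). A small sketch from a Linux/macOS shell; the IP address is from my deployment, so substitute your own:

```shell
# Export the endpoint printed by the installer so that -H can be dropped.
# 10.27.51.41 is the client IP from my deployment - substitute your own.
export DOCKER_HOST=10.27.51.41:2376

# Subsequent docker commands then only need the --tls flag, e.g.:
#   docker --tls info
#   docker --tls run -it busybox
```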
That all seems good. Let’s run my first container:
C:\Users\chogan\Downloads\vic>docker -H 10.27.51.41:2376 --tls run -it busybox
Unable to find image 'busybox:latest' locally
Pulling from library/busybox
7520415ce762: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:32f093055929dbc23dec4d03e09dfe971f5973a9ca5cf059cbfb644c206aa83f
Status: Downloaded newer image for library/busybox:latest
/ #
Excellent. Now a few other things to point out with VIC v1.1. You might remember Admiral and Harbor, features which I discussed in the past. These are now completely embedded. Simply point your browser at port 8282 of the IP address of the VIC OVA that you previously deployed, and you will get Admiral. This can be used for the orchestrated deployment of “container as VM” templates. These templates can be retrieved from either Docker Hub or your own local registry for VIC, i.e. Harbor.
And to access Harbor, simply click on the “Registry” field at the top of the navigation screen:
You can look back on my previous posts on how to use Admiral and Harbor for orchestrated deployment and registry respectively. Let’s finish this post with one last command, the same one I started with to list VCHs.
C:\Users\chogan\Downloads\vic>vic-machine-windows.exe ls --target vcsa-06.rainpole.com \
--user administrator@vsphere.local --password "xxx" \
--thumbprint 4B:A0:D1:84:92:DD:BD:38:07:E3:38:01:4B:0C:F1:14:E7:5D:5B:00
May 2 2017 11:13:09.002+01:00 INFO ### Listing VCHs ####
May 2 2017 11:13:09.178+01:00 INFO Validating target
ID PATH NAME VERSION UPGRADE STATUS
vm-36 /DC/host/Cluster/Resources corVCH01 v1.1.0-9852-e974a51 Up to date
C:\Users\chogan\Downloads\vic>
Now my VCH is listed.
Again, I’m only scratching the surface of what VIC can do for you. If you want to give your developers the ability to use containers, but wish to maintain visibility into container resources (networking, storage, CPU, memory, etc.), then maybe VIC is what you need. I’ll try to do some more work with VIC 1.1 over the coming weeks. Hopefully this is enough to get you started.
14 Replies to “Getting started with VIC v1.1”
Thanks again Cormac. Regarding the fork/instant-clone – I *REALLY* thought it was using that; in fact I thought I’d spoken to GeorgeH about this face to face – did something change?
I *believe* that was the plan, but there were interop issues. Prob worth closing the loop with George on it, as I’m not quite so involved these days.
As far as I know, instant clone disables the vMotion capability of the container VMs, which would contradict one major benefit of VIC: mobility. That might be the reason bootstrapping VMs is still the way to go.
Hey Cormac, great post. Is there a way to remove the single point of failure that is the VCH?
You should be able to leverage vSphere HA to avoid underlying infrastructure issues impacting the VCH, Mikel.
But is there something else you would like to see added? Multiple VCH/docker endpoints per resource pool? Fault Tolerance?
I’d be interested in hearing more if you have time.
I haven’t had the time to test it properly yet, but I wonder what happens if a host running both containers and the VCH fails. I assume that the containers can only be restarted after the VCH comes back up?
Is there still no way to mount nfs shares? I want to use VIC but this is the only thing that kept me from doing it, I want to directly mount nfs shares from my NAS as folders.
BTW I don’t wanna use an NFS share as a volume store. I actually want to mount the NFS share as a folder within my container, since I also access some of the data on other hosts, for example to quickly edit a configuration on my desktop computer.
Based on feedback from an earlier query, this doesn’t look like it made it to v1.1. I would confirm via the VIC pages on github however.
Well damn, I hoped they would finally implement this. It would be enough if they enabled mounting host folders, so I could just install nfs-utils and mount the share within the VIC host machine, but since mounting host folders is also not supported with VIC, I’m screwed..
The thing that bothers me the most is that if I use volume stores to persist my data, I don’t have any easy way to access the files, and the volumes always have a limited size.. I can’t even create a container that serves the files within these volume stores, since shared volumes also aren’t supported, god dammit..
I guess I still have to wait until VIC finally supports these features.. Maybe I’ll try using Docker on a Photon OS VM as a standalone Docker host, if I can get the netshare plugin to work with Photon OS.
I’ll let the VIC team know that you would like to see this supported sooner rather than later.
That would be great, thank you!