Selecting a particular portgroup for frameworks on Photon Controller

Continuing my education on Photon Controller, I was trying to figure out how to select a particular VM network (port group) for containers to use when deploying a framework on top of Photon Controller. Let’s say, for instance, that I had two VM Networks, one using VLAN 51 and another using VLAN 30. Initially I thought the frameworks would simply pick up the default “VM Network”, but I quickly realized this was not the case. How, then, would I select the correct one for my framework?
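
For anyone trying to do the same, the photon CLI has a network command group that can map a Photon Controller network onto a specific vSphere port group, and then mark that network as the default for subsequently deployed frameworks. The sketch below is from memory rather than from the docs, so treat the exact flag names as assumptions that may differ between CLI builds:

    # Create a Photon Controller network backed by a specific port group.
    # The --portgroups flag and the "VLAN51-PG" label are assumptions
    # for illustration; check "photon network create --help" on your build.
    photon network create --name vlan51-net --portgroups "VLAN51-PG"

    # Make it the default, so frameworks deployed afterwards attach
    # their container VMs here rather than to "VM Network".
    photon network list
    photon network set-default <network-id>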

Resetting the Photon Controller Deployer configuration

As mentioned previously, I’m spending some time these days working on the Photon Controller product. Right now, I’m just familiarizing myself with it as much as possible. As I try different things and test various options, I find that I repeatedly need to reset the Photon Controller Deployer so that I can stand up a fresh Photon Controller deployment. The Deployer is only used to roll out Photon Controller initially; it is not needed after that initial deployment step. In case you find yourself doing something similar, I have added the steps here. Hopefully you will find them useful.
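
For context, the broad shape of the reset is: remove the existing deployment through the CLI, then clear out the Deployer so a fresh deployment can begin. This is a hedged sketch only; the container details are assumptions and will depend on your Photon Controller build:

    # Find and remove the current deployment via the photon CLI.
    photon deployment list
    photon deployment delete <deployment-id>

    # On the installer VM, the Deployer runs as a container; restarting
    # it (the container name/id here is an assumption) gives you a clean
    # starting point for the next deployment.
    docker ps
    docker restart <deployer-container-id>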

Photon Controller – Image Issues – 413 Request Entity Too Large

Over the last couple of days I’ve been getting to grips with Photon Controller v0.8. For those of you who do not follow developments in our Cloud Native Apps BU, Photon Controller leverages ESXi hosts to provide compute and management for containers at large scale. It also allows you to stand up container frameworks such as Kubernetes, Docker Swarm and Mesos very quickly. I’m not going to take you through the step-by-step deployment instructions, as my colleague and good pal William Lam has already covered those. Instead, I’m going to try to highlight some newbie…
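
For anyone who hits the same error before reading on: a 413 status is almost always a front-end proxy rejecting an upload that exceeds its request-body limit, rather than the API itself. If the component fronting the image upload endpoint is nginx (an assumption on my part, not something confirmed above), the generic remedy looks like this:

    # In nginx, the request-body cap is client_max_body_size (the
    # default is a tiny 1m). Raise it above your image size, or set
    # it to 0 to disable the check, then reload:
    http {
        client_max_body_size 0;
    }
    # nginx -s reload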

Expanding on VSAN 2-node, 3-node and 4-node configuration considerations

I spent the last 10 days at VMware HQ in Palo Alto, and had lots of really interesting conversations and meet-ups, as you might imagine. One of those conversations revolved around minimum VSAN configurations. Let’s start with the basics. 2-node: there are two physical hosts for data, plus a witness appliance hosted elsewhere. Data is placed on the physical hosts, and the witness appliance holds witness components only, never any data. 3-node: there are three physical hosts, and the data and witness components are distributed across all of them. This configuration can support a number of failures to…
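
A rule of thumb that underpins these minimums: a RAID-1 mirrored object with NumberOfFailuresToTolerate=n needs 2n+1 hosts, i.e. n+1 data replicas plus n witnesses, which is why tolerating one failure requires at least three nodes (or two nodes plus the witness appliance). To see the actual placement, RVC (the Ruby vSphere Console on vCenter) can help; the inventory path below is hypothetical and the output is abbreviated for illustration:

    # In RVC, show where VSAN placed the components of a VM's objects.
    > vsan.vm_object_info /localhost/DC/vms/test-vm
        DOM Object: <uuid> (policy: hostFailuresToTolerate = 1)
          RAID_1
            Component: <uuid> (state: ACTIVE, host: esxi-01)
            Component: <uuid> (state: ACTIVE, host: esxi-02)
          Witness: <uuid> (state: ACTIVE, host: esxi-03)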

FlashSoft I/O Filter VAIO Setup Steps

Last week I had the opportunity to drop down to San Jose and catch up with our friends on the FlashSoft team at SanDisk. In case you were not aware, this team has been developing a cache acceleration I/O filter as part of the VAIO program (VAIO is short for vSphere APIs for IO Filtering). SanDisk were also one of the design partners chosen by VMware for VAIO. This program allows our partners to plug directly into the VM I/O path and add third-party data services, such as replication, encryption, quality of service and so on. An interesting observation…
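
On the setup side, VAIO filters ship as VIBs that need to be present on every ESXi host in the cluster; in practice they are pushed out cluster-wide from vCenter, but a manual install on a single host is possible. A hedged sketch, with the bundle filename being a placeholder rather than the real FlashSoft artifact:

    # Install the I/O filter offline bundle on an ESXi host.
    esxcli software vib install -d /vmfs/volumes/datastore1/flashsoft-vaio-bundle.zip

    # Once installed, the filter's capabilities surface in SPBM, so the
    # cache is enabled per-VM by attaching a VM storage policy that
    # references those capabilities.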

Check out the new VSAN 6.2 Hands-On-Lab

HOL-SDC-1608, our VSAN hands-on lab, has been updated for VSAN version 6.2. The lab covers a bunch of new VSAN 6.2 features, including erasure coding (RAID-5/6), checksum, sparse swap and dedupe/compression. You can also see the new health check views, performance metric views and capacity views. Also included is a workflow that guides you through configuring VSAN stretched cluster and remote-office/branch-office (ROBO) implementations, and shows how these features work with HA to restart VMs in the event of a failure. The whole lab is modularised, so you can simply look at the features that interest you. You can get access via…

Recovering from a full VSAN datastore scenario

We had an interesting event happen on one of our lab servers this weekend. One of the hosts in our four-node cluster hit an issue, which meant that the storage on that host was no longer available to the VSAN datastore. Since VSAN auto-heals, it attempted to re-protect as many VMs as possible. However, since we had chosen to ignore one of the health check warnings relating to limits, we ended up with a full VSAN datastore.
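
As it turns out, that limits warning exists precisely to flag whether enough spare capacity remains to absorb a rebuild after a host failure. Before dismissing it, it is worth checking the numbers from RVC; the cluster path below is hypothetical:

    # In RVC, report VSAN component counts and per-disk utilization
    # across the cluster, which shows whether a rebuild will fit.
    > vsan.check_limits /localhost/DC/computers/VSAN-Cluster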