Kubernetes Storage on vSphere 101 – NFS revisited

In my most recent 101 post, on ReadWriteMany volumes, I shared an example in which we created an NFS server in a Pod which automatically exported a File Share. We then mounted the File Share to multiple NFS client Pods deployed in the same namespace. We saw how multiple Pods were able to write to the same ReadWriteMany volume, which was the purpose of the exercise. I received a few questions on the back of that post relating to the use of Services. In particular, could an external NFS client, even one outside of the K8s cluster, access a volume from…
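To make the question concrete, here is a minimal sketch of how such a Service might look, assuming a hypothetical NFS server Pod labelled app: nfs-server that exports /exports; the names and the LoadBalancer type are my own illustration, not taken from the post.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  type: LoadBalancer        # hand out an external IP reachable from outside the cluster
  selector:
    app: nfs-server         # assumed label on the NFS server Pod
  ports:
  - name: nfs
    port: 2049              # standard NFS port
  - name: rpcbind
    port: 111
  - name: mountd
    port: 20048
EOF

# An external NFS client could then mount the share via the Service's external IP:
sudo mount -t nfs <EXTERNAL-IP>:/exports /mnt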

PKS and NSX-T: Error: Timed out pinging after 600 seconds

I’m still playing with PKS 1.3 and NSX-T 2.3.1 in my lab. One issue that I kept encountering was that when deploying my Kubernetes cluster, my master and worker nodes kept failing with a “timed out” error when trying to ping. A bosh task command showed the errors, as shown here.

cormac@pks-cli:~$ bosh task
Using environment ‘192.50.0.140’ as client ‘ops_manager’

Task 845

Task 845 | 16:56:36 | Preparing deployment: Preparing deployment
Task 845 | 16:56:37 | Warning: DNS address not available for the link provider instance: pivotal-container-service/0c23ed00-d40a-4bfe-abee-1c
Task 845 | 16:56:37 | Warning: DNS address not available for the…
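If you hit something similar, these are the sort of standard bosh CLI commands I would use to dig further; the task ID 845 comes from the output above, and the deployment name is a placeholder.

bosh tasks --recent          # list recent tasks and whether they succeeded or failed
bosh task 845 --debug        # dump the full debug log for the failed task
bosh -d <deployment> vms     # check which master/worker VMs actually came up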

Reviewing PKS logs and status

After a bit of a sabbatical, I am back to looking at PKS (Pivotal Container Service) again. I wanted to look at the new version 1.3, but first I had to do a bit of work on my environment to allow me to do this. Primarily, I needed to upgrade my NSX-T environment from version 2.1 to 2.3. I followed this blog post from vmtechie, which provides a useful step-by-step guide. Kudos to our VMware NSX-T team, as the upgrade worked without a hitch. My next step was to start work on the PKS deployment. I just did a brand new deployment…
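For quick reference, checking cluster status from the PKS CLI looks like this; the cluster name k8s-cluster-01 is a placeholder of mine, not one from the post.

pks clusters                 # list all clusters and the status of their last action
pks cluster k8s-cluster-01   # detailed status of one cluster, including its master IP
bosh -d <deployment> logs    # gather logs from the cluster's underlying BOSH deployment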

Integrating NSX-T and Pivotal Container Service (PKS)

If you’ve been following along with my recent blog posts, you’ll have seen that I have been spending some time ramping up on NSX-T and Pivotal Container Service (PKS). My long-term goal was to see how these two products integrate together and to figure out the various moving parts. As I was very unfamiliar with both products, I took a piecemeal approach to both. First, I tried to get some familiarity with NSX-T. You can find my previous posts on NSX-T here:

Building a simple ESXi host overlay network with NSX-T
First steps with NSX-T Edge – DHCP Server
Next…

Next steps with NSX-T Edge – Routing and BGP

If you’ve been following along on my NSX-T adventures, you’ll be aware that at this point we have our overlay network deployed, and our NSX-T Edge has been set up with a DHCP server attached to my logical switch, which in turn provides IP addresses to my virtual machines. This is all well and good, but I’d also like these VMs to reach the outside world. NSX-T enables this through a feature called logical routers. In this post, I will talk you through how to configure a tier 0 logical router which connects to the outside world, a tier 1 logical router…
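I configured these routers through the NSX-T UI, but for reference, creating the tier 0 logical router via the NSX-T 2.x Manager REST API would look roughly like the sketch below; the manager address, credentials and edge cluster UUID are all placeholders.

curl -k -u 'admin:<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/logical-routers \
  -d '{
        "display_name": "tier0-router",
        "router_type": "TIER0",
        "high_availability_mode": "ACTIVE_STANDBY",
        "edge_cluster_id": "<edge-cluster-uuid>"
      }'

# A tier 1 logical router is the same call with "router_type": "TIER1";
# it is then linked to the tier 0 router with a pair of router ports.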

First Steps with NSX-T Edge – DHCP server

Now that we have an overlay network deployed, it’s time to turn our attention to the NSX-T Edge, and get it to do something useful for us. An NSX-T Edge can do many useful things for you (routing, NAT, etc.). But I really want to keep things as simple as possible, so I will deploy my NSX-T Edge to provide DHCP addresses to my VMs. In order to do this, my Edge will first of all need to participate in the same overlay/tunnel network as my hosts. I will then need to create a logical switch that my VMs can…
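I built this out in the UI, but as a rough equivalent, the NSX-T 2.x Manager API call to create the logical DHCP server might look like this; the server profile UUID, subnet and names are placeholders of mine, and the server still needs to be attached to the logical switch afterwards.

curl -k -u 'admin:<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/dhcp/servers \
  -d '{
        "display_name": "edge-dhcp-server",
        "dhcp_profile_id": "<dhcp-server-profile-uuid>",
        "ipv4_dhcp_server": {
          "dhcp_server_ip": "192.168.100.2/24",
          "gateway_ip": "192.168.100.1"
        }
      }'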

Building a simple ESXi host overlay network with NSX-T

I’ve recently begun to look at NSX-T. My long-term goal is to use it to enable me to build multiple Kubernetes clusters using PKS, the Pivotal Container Service. The hope is then to look at some cool storage-related items with Kubernetes. But first things first. Kudos to both Sam McGeown and William Lam for their excellent blogs on NSX-T. However, I’m coming at this as a newbie, and I’m not using a nested environment, but rather a four-node physical environment in my lab. And I am also not separating my cluster into management and production, but rather using…
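The first building block in an overlay build like this is an overlay transport zone that the hosts (and later the Edge) will join. I did this in the UI, but via the NSX-T 2.x Manager API it would look something like the following; the names are placeholders of mine.

curl -k -u 'admin:<password>' -H "Content-Type: application/json" \
  -X POST https://nsx-manager.lab.local/api/v1/transport-zones \
  -d '{
        "display_name": "tz-overlay",
        "host_switch_name": "nsx-overlay-hostswitch",
        "transport_type": "OVERLAY"
      }'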