
Creating developer users and namespaces (scripted) in TKG “Guest” Clusters

I’ve spent a lot of time recently creating and building out a vSphere with Tanzu environment, with the goal of deploying a Tanzu Kubernetes “guest” cluster. I frequently used the kubectl-vsphere command to log out of the Supervisor namespace context and log in to the Guest cluster context. This allowed me to start deploying stateless and stateful apps in my Tanzu Kubernetes Guest cluster. I thought no more about this step until a recent conversation with my colleague Frank Denneman. He queried whether or not Kubernetes developers would actually have vSphere privileges to do this. It was a great question, which led to some follow-on conversations with Ben Corrie, one of our lead engineers working on vSphere with Tanzu. Ben told us that the product is designed with 3 personas in mind:

  1. vSphere Admin – responsible for creating namespaces and user management of Kubernetes cluster administrators
  2. Kubernetes Cluster Admin – responsible for K8S cluster lifecycle/registry management and user management of developers
  3. Developer – consumer of K8s clusters

The vSphere Administrator would obviously use the vSphere Client UI, as the ability to create and manage namespaces is available there. The K8s cluster administrator would then have access to the namespace via the kubectl-vsphere command, and from there can create TKG guest clusters for their developer team. So the expectation is that we would not have vSphere administrators dealing with YAML manifests, and we would not have developers requiring vSphere SSO credentials.

This definition of roles may well hold true for larger customers who are running at scale. But I have my doubts that in smaller environments there would be such a distinction between the role of vSphere Admin and Kubernetes Cluster Admin. My suspicion is that the role of Kubernetes Cluster Admin is something that a vSphere Administrator will transition into, in much the same way as they transitioned into managing storage with vSAN and managing networking with NSX. I’d love to hear whether or not you agree with me on this, or if you feel that these are absolutely distinct roles in your organization.

Now – onto the crux of this post. Whether you are the Kubernetes Cluster Admin, or a combination of vSphere Admin + K8s Cluster Admin, the one thing I struggled with was the creation of a K8s context + user + namespace privileges for a developer in the TKG guest cluster. I did find one excellent article about granting users access to your Kubernetes cluster, but it was quite challenging as there were so many steps. I thought there must be a way to automate this, so I went ahead and put together a script that creates a unique user and namespace context and grants the user (i.e. the developer) the ability to work only in their own namespace. In other words, they get a RoleBinding giving them admin permissions in their own namespace, but not a ClusterRoleBinding.
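
To give a sense of what that namespace-scoped permission looks like, here is a minimal sketch of such a RoleBinding. The binding name matches what the script creates in step 17 below, but the choice of the built-in admin ClusterRole as the roleRef is my assumption about how a binding like this is typically constructed, not a line lifted from the script:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: chogan-admin
  namespace: chogan-ns    # permissions apply in this namespace only
subjects:
- kind: User
  name: chogan
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole       # re-uses the built-in "admin" role, scoped down by the RoleBinding (assumption)
  name: admin
  apiGroup: rbac.authorization.k8s.io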

Here is an example output from the script, where the context is currently set to the Tanzu Kubernetes guest cluster, and I am going to create a new user called “chogan” with its own namespace, “chogan-ns”. What you will notice is that the script expects an “Enter” to be hit between every step. This is simply so you can follow the flow. If you are happy that it works satisfactorily, you can run the script with a single argument, e.g. “./setup-k8s-user.sh auto”, and it will run through all of the steps automatically without requiring any intervention.

% ./setup-k8s-user.sh
--------------------------------------------------------------------------------------------------
This script will create a user and a namespace in the current Kubernetes cluster context but will
restrict the ability of a particular user to perform tasks to their own namespace.

The user will not be allowed to look at any cluster wide objects, but instead will only be able
to create, monitor, manage and delete Kubernetes objects in their own namespace.

Prerequisites:
- kubectl
- openssl
- awk
- sed
- a running Kubernetes cluster

Guidance:
- First run script without any command line options to understand the flow.
- If satisfied it is working, run script with any additional command line option to skip enter
   key requirement after every step, e.g. './setup-k8s-user.sh auto'
--------------------------------------------------------------------------------------------------

-- Step 0: Checking dependencies ...

Type in the name of the user (e.g. bob): chogan
Type in the name of the namespace that the user should work in (e.g. bob-ns): chogan-ns

*** Current context is tkg-cluster-1-18-5
*** Creating a new restricted namespace chogan-ns for user chogan

Hit enter to continue


-- Step 1: Delete older files from last run ...

Hit enter to continue


-- Step 2: Create key and certificate signing request (CSR) for chogan ...

Hit enter to continue
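
A quick aside on what this step most likely involves: a private key plus a certificate signing request, generated with a couple of standard openssl commands along these lines (the file names and the /O= group are my guesses, not necessarily what the script uses):

openssl genrsa -out chogan.key 2048
openssl req -new -key chogan.key -out chogan.csr -subj "/CN=chogan/O=dev-team"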


-- Step 3: Create a CertificateSigningRequest manifest with the CSR generated in step 2 ...

Hit enter to continue
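
The request field in that manifest is simply the CSR generated in step 2, base64 encoded with the line breaks stripped, presumably something like:

cat chogan.csr | base64 | tr -d '\n'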


-- Step 4: Display newly created CSR manifest chogan-k8s-csr.yaml

apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: chogan-k8s-access
spec:
  groups:
  - system:authenticated
  request: ---snip---
  usages:
  - client auth

Hit enter to continue


-- Step 5: Create the CSR in Kubernetes ...

certificatesigningrequest.certificates.k8s.io "chogan-k8s-access" deleted
certificatesigningrequest.certificates.k8s.io/chogan-k8s-access created

Hit enter to continue
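
The “deleted” line above suggests the script first clears out any CSR object left over from a previous run before submitting the new one, i.e. something along the lines of:

kubectl delete csr chogan-k8s-access --ignore-not-found=true
kubectl create -f chogan-k8s-csr.yaml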


-- Step 6: Check Status of CSR, currently not approved, pending ...

NAME                AGE   SIGNERNAME                     REQUESTOR                         CONDITION
chogan-k8s-access   11s   kubernetes.io/legacy-unknown   sso:Administrator@vsphere.local   Pending

Hit enter to continue


-- Step 7: Approve CSR ...

certificatesigningrequest.certificates.k8s.io/chogan-k8s-access approved

NAME                AGE   SIGNERNAME                     REQUESTOR                         CONDITION
chogan-k8s-access   13s   kubernetes.io/legacy-unknown   sso:Administrator@vsphere.local   Approved,Issued

Hit enter to continue
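
Approval and the follow-up status check are one-liners, presumably:

kubectl certificate approve chogan-k8s-access
kubectl get csr chogan-k8s-access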

-- Step 8: Retrieve User Certificate from K8s and store locally in chogan-k8s-access.crt ...

-----BEGIN CERTIFICATE-----
---snip---
-----END CERTIFICATE-----

Hit enter to continue
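
The issued certificate is stored base64 encoded in the CSR’s status, so the retrieval is most likely along these lines:

kubectl get csr chogan-k8s-access -o jsonpath='{.status.certificate}' | base64 --decode > chogan-k8s-access.crt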


-- Step 9: Retrieve K8s Cluster CA Certificate and store locally in chogan-k8s-ca.crt...

-----BEGIN CERTIFICATE-----
---snip---
-----END CERTIFICATE-----

Hit enter to continue
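
One common way to pull the cluster CA certificate is straight out of the current (admin) kubeconfig; the script may do it differently, but the effect is the same:

kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode > chogan-k8s-ca.crt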


-- Step 10: Create chogan's KUBECONFIG using CA Certificate...

Cluster "10.202.112.153" set.
Hit enter to continue
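
Going by the Cluster "10.202.112.153" set message, this step runs kubectl config set-cluster against a brand new kubeconfig file. The API server port and the kubeconfig file name below are assumptions on my part:

kubectl config set-cluster 10.202.112.153 --server=https://10.202.112.153:6443 \
  --certificate-authority=chogan-k8s-ca.crt --embed-certs=true --kubeconfig=chogan-kubeconfig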


-- Step 11: Set user chogan credentials using client key and cert ...

User "chogan" set.
Hit enter to continue
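
Similarly, the user credentials are set from the certificate retrieved in step 8 and the private key generated back in step 2 (file names again my guesses):

kubectl config set-credentials chogan --client-certificate=chogan-k8s-access.crt \
  --client-key=chogan.key --embed-certs=true --kubeconfig=chogan-kubeconfig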


-- Step 12: Create a context for chogan...

Context "chogan" created.
CURRENT   NAME     CLUSTER          AUTHINFO   NAMESPACE
          chogan   10.202.112.153   chogan     chogan-ns

Hit enter to continue
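
The context ties the cluster, the user and the namespace together, presumably along these lines:

kubectl config set-context chogan --cluster=10.202.112.153 --user=chogan \
  --namespace=chogan-ns --kubeconfig=chogan-kubeconfig
kubectl config get-contexts --kubeconfig=chogan-kubeconfig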


-- Step 13: Cleanup, Create and Label namespace (chogan-ns) ...

namespace/chogan-ns created

namespace/chogan-ns labeled

NAME                           STATUS   AGE
cassandra                      Active   2d18h
chogan-ns                      Active   2s
default                        Active   2d19h
kube-node-lease                Active   2d19h
kube-public                    Active   2d19h
kube-system                    Active   2d19h
vmware-system-auth             Active   2d19h
vmware-system-cloud-provider   Active   2d19h
vmware-system-csi              Active   2d19h

Hit enter to continue
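
This step is a straightforward delete/create/label of the namespace, done with the admin credentials. The output does not show which label the script applies, so the user=chogan label below is purely a placeholder:

kubectl delete namespace chogan-ns --ignore-not-found=true
kubectl create namespace chogan-ns
kubectl label namespace chogan-ns user=chogan
kubectl get namespaces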


-- Step 14: Set a context...

Switched to context "chogan".

CURRENT   NAME     CLUSTER          AUTHINFO   NAMESPACE
*         chogan   10.202.112.153   chogan     chogan-ns

Hit enter to continue


-- Step 15: Final Authentication Test...

Client Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.2-2+15451ee5f68207", \
GitCommit:"15451ee5f68207ebcf6fdea3f28e21bc3fca5b9a", GitTreeState:"clean", BuildDate:"2020-05-28T14:58:57Z", \
GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5+vmware.1", \
GitCommit:"1abde2b816bac0da89c6c71360799c681094ca0e", GitTreeState:"clean", BuildDate:"2020-06-29T22:31:51Z", \
GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

-- Congrats - chogan is now authenticated but is not authorized to do anything. Let's fix that next...

Hit enter to continue


-- Step 16: Authorization test 1 ... expected to not work...

Error from server (Forbidden): pods is forbidden: User "chogan" cannot list resource "pods" in API group "" \
in the namespace "chogan-ns"

Hit enter to continue


-- Step 17: Create a RoleBinding to allow chogan to do stuff in namespace chogan-ns ...

rolebinding.rbac.authorization.k8s.io/chogan-admin created

Hit enter to continue
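
This is presumably issued with the cluster admin’s credentials rather than chogan’s (a user cannot grant himself permissions), and binding to the built-in admin ClusterRole, as in the manifest sketched earlier, is my assumption:

kubectl create rolebinding chogan-admin --clusterrole=admin --user=chogan --namespace=chogan-ns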


-- Step 18: Authorization test part deux ... should now work...

No resources found in chogan-ns namespace.

Hit enter to continue


-- Step 19: Merge new config to .kube/config ...

-- Merging chogan's config with main config
-- Copying merged config to main config
-- Switching to new context chogan

Switched to context "chogan".

CURRENT   NAME                 CLUSTER          AUTHINFO                                         NAMESPACE
          10.202.112.152       10.202.112.152   wcp:10.202.112.152:administrator@vsphere.local
*         chogan               10.202.112.153   chogan                                           chogan-ns
          cormac-ns            10.202.112.152   wcp:10.202.112.152:administrator@vsphere.local   cormac-ns
          tkg-cluster-1-18-5   10.202.112.153   wcp:10.202.112.153:administrator@vsphere.local

-- Done ...

-- Note that you need to run kubectl commands as the actual user chogan even after
-- you have switched contexts. Otherwise you will be prompted for username and pwd.
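
The merge in step 19 looks like the standard KUBECONFIG trick of flattening multiple config files into one; a minimal sketch, with the file names assumed:

KUBECONFIG=~/.kube/config:chogan-kubeconfig kubectl config view --flatten > /tmp/merged-config
cp /tmp/merged-config ~/.kube/config
kubectl config use-context chogan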

At this point, user chogan now has his own namespace (chogan-ns) and can create objects within that namespace, but does not have access to even query other resources outside the scope of his namespace.

 % whoami
chogan

% kubectl apply -f ../PacificYAML/busybox-pod.yaml
pod/busybox1 created

% kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
busybox1   1/1     Running   0          6s

% kubectl get ns
Error from server (Forbidden): namespaces is forbidden: User "chogan" cannot list resource "namespaces" \
in API group "" at the cluster scope

I also include a script to remove the user, as well as the namespace. Here is an example of it running in auto mode as mentioned previously, which means it doesn’t require any user interaction. Since I am deleting my current context, I am first switching to the TKG guest cluster context before running it.

% kubectl config get-contexts
CURRENT   NAME                 CLUSTER          AUTHINFO                                         NAMESPACE
          10.202.112.152       10.202.112.152   wcp:10.202.112.152:administrator@vsphere.local
*         chogan               10.202.112.153   chogan                                           chogan-ns
          cormac-ns            10.202.112.152   wcp:10.202.112.152:administrator@vsphere.local   cormac-ns
          tkg-cluster-1-18-5   10.202.112.153   wcp:10.202.112.153:administrator@vsphere.local

% kubectl config use-context tkg-cluster-1-18-5
Switched to context "tkg-cluster-1-18-5".

% ./remove-k8s-user.sh auto
--------------------------------------------------------------------------------------------------
This script will remove a user and a namespace in the current Kubernetes cluster context.

The user was originally created by the accompanying script, setup-k8s-user.sh.
Future plans will be to merge both scripts into a single entity with multiple options.

Prerequisites:
- kubectl
- a running Kubernetes cluster

Guidance:
- First run script without any command line options to understand the flow.
- If satisfied it is working, run script with any additional command line option to skip enter
   key requirement after every step, e.g. './remove-k8s-user.sh auto'

--------------------------------------------------------------------------------------------------

-- Step 0: Checking dependencies ...

Type in the name of the user (e.g. bob): chogan
Type in the name of the namespace that the user has privileges in (e.g. bob-ns): chogan-ns

*** Current context is tkg-cluster-1-18-5
*** Deleting user chogan in namespace chogan-ns


-- Step 1: Delete the RoleBinding for chogan to do stuff in namespace chogan-ns ...

rolebinding.rbac.authorization.k8s.io "chogan-admin" deleted


-- Step 2: Delete the CSR in Kubernetes ...

certificatesigningrequest.certificates.k8s.io "chogan-k8s-access" deleted


-- Step 3: Cleanup namespace (chogan-ns) ...

namespace "chogan-ns" deleted


-- Step 4: Delete a context for chogan...

deleted context chogan from /Users/chogan/.kube/config


-- Step 5: Delete files for user chogan  ...


-- Step 6: Check that namespace chogan-ns is now deleted ...

NAME                           STATUS   AGE
cassandra                      Active   2d18h
default                        Active   2d19h
kube-node-lease                Active   2d19h
kube-public                    Active   2d19h
kube-system                    Active   2d19h
vmware-system-auth             Active   2d19h
vmware-system-cloud-provider   Active   2d19h
vmware-system-csi              Active   2d19h


-- Step 7: Check that config for user chogan is now deleted ...


CURRENT   NAME                 CLUSTER          AUTHINFO                                         NAMESPACE
          10.202.112.152       10.202.112.152   wcp:10.202.112.152:administrator@vsphere.local
          cormac-ns            10.202.112.152   wcp:10.202.112.152:administrator@vsphere.local   cormac-ns
*         tkg-cluster-1-18-5   10.202.112.153   wcp:10.202.112.153:administrator@vsphere.local
%

The scripts are available on GitHub here. Feel free to use them, or add enhancements, as you wish. One last reminder – if you have any thoughts about the vSphere Administrator + Kubernetes Cluster Administrator roles, please share them. It will certainly help us shape the future direction of vSphere with Tanzu.
