Note: I am using non-GA versions of software to build this post. The screenshots used here may change in the vSphere 8.0 launch, and in subsequent releases.
1. Retrieve the callback URL
The first step is to retrieve the callback URL for your Supervisor cluster via the vSphere client. You need this in order to federate Pinniped on the Supervisor with the Identity Provider. The callback URL is available under Workload Management > Supervisor > “Your Supervisor Cluster” > Configure > Identity Providers.
2. Configure OpenID Connect in your Identity Provider
The next step is implemented on the Identity Provider. This involves creating a new OpenID Connect (OIDC) application. The application should use token-based OAuth 2.0 authentication. The callback URL retrieved in step 1 is used as the sign-in redirect URL. If you use a common IDP such as Okta, there are some good pointers on the Pinniped site detailing how to integrate with the Supervisor cluster. Okta allows you to sign up for a free 30-day trial of its products if you want to do a proof-of-concept, which is what I have used here. It provides the ability to build a trial org and create the required OIDC application. After completing the configuration, the IDP should provide OAuth 2.0 client details, namely a Client ID and Client Secret. These are then used to configure the Identity Provider back in vSphere, which is the next step.
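As a rough illustration, the settings for such an OIDC application end up looking something like the sketch below. All values here are placeholders, and the exact field names vary from one IDP to another; this is only meant to show which pieces connect to which.

```yaml
# Illustrative OIDC application settings in the IDP (placeholder values only)
application_type: web
grant_types:
  - authorization_code
  - refresh_token
# The sign-in redirect URI is the Supervisor callback URL from step 1:
sign_in_redirect_uri: https://<supervisor-address>/wcp/pinniped/callback
scopes:
  - openid
  - email
  - groups
# Generated by the IDP once the application is created, and used in step 3:
client_id: <client-id>
client_secret: <client-secret>
```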
3. Add the Identity Provider Details to vSphere
Return to Workload Management > Supervisor > “Your Supervisor Cluster” > Configure > Identity Providers. Click the + sign to add a Provider. With the Client ID and Client Secret from the IDP, we can add the information needed to enable Pinniped on the Supervisor Cluster to federate with the IDP. Here is a sample configuration which I set up in my lab. As you can see from the Issuer URL, I am using a trial from Okta.
I have also set the Username Claim to email. This makes it easier to identify the developer or platform operator by name in the Kubernetes logs. I also set the Groups Claim to groups; this matches the groups claim name that I configured in Okta. If these claims are not set, it is very difficult to identify users in the logs. Instead of an identifier like email@example.com, users appear as a reference to the issuer URL and subject identifier, which is not very helpful (see snippet below):
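To see why the username claim matters, here is a minimal sketch in plain Python of how the claims inside an OIDC ID token map onto a Kubernetes username. The token payload below is hypothetical (it reuses the identifiers from the log snippet for illustration), and this is only the payload segment of a JWT, not a signed token.

```python
import base64
import json

# Hypothetical ID-token claims, for illustration only.
payload = {
    "iss": "https://trial-3621070.okta.com",
    "sub": "00u23mto1zbcZdPrj697",
    "email": "email@example.com",
    "groups": ["developers"],
}

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT segments are encoded."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# A JWT is header.payload.signature; we only sketch the payload segment here.
segment = b64url(json.dumps(payload).encode())

# Decoding the segment (padding restored) recovers the claims.
padded = segment + "=" * (-len(segment) % 4)
claims = json.loads(base64.urlsafe_b64decode(padded))

# With the Username Claim set to email, the Kubernetes user is the email claim.
# Without it, the user is derived from issuer + subject, as in the log snippet.
username_with_claim = claims["email"]
username_without_claim = f'{claims["iss"]}?sub={claims["sub"]}'
print(username_with_claim)     # email@example.com
print(username_without_claim)  # https://trial-3621070.okta.com?sub=00u23mto1zbcZdPrj697
```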
Error: unable to get list of workload clusters managed by current management cluster:
unable to retrieve combined cluster info: unable to get list of clusters: failed to list *v1beta1.ClusterList:
clusters.cluster.x-k8s.io is forbidden:
User "https://trial-3621070.okta.com?sub=00u23mto1zbcZdPrj697" cannot list resource "clusters"
in API group "cluster.x-k8s.io" in the namespace "cormac-ns"
Continuing with the remaining setup steps, we must add the OAuth 2.0 Client Details. Add the Client ID and Client Secret from the IDP.
I have not selected any additional settings for the purposes of this setup. That completes the Provider setup in the vSphere client; simply review the configuration and click “Finish”. The Pinniped application running on the Supervisor cluster should now be federated with the IDP, and any attempt to access the Supervisor or TKCs via the tanzu CLI should result in a prompt for credentials from the IDP. In fact, support for the tanzu CLI is another important feature to highlight in vSphere with Tanzu in 8.0. We will see how to use it in just a moment.
4. Add Users to vSphere Namespace Permissions
Now that the IDP and Pinniped on the Supervisor cluster are federated, users from the IDP are added to the Permissions on the vSphere Namespace. In this example, I am adding the user firstname.lastname@example.org to the vSphere Namespace cormac-ns. This is the username that I registered with the Okta org.
5. Supervisor cluster access using IDP
Now for the final step: accessing my Supervisor cluster as an IDP-authenticated user via the tanzu CLI. To achieve this, I am using the new tanzu login --endpoint method rather than the kubectl-vsphere vSphere SSO login. The tanzu login command causes Pinniped to contact the OIDC IDP and authenticate the user, prompting the user to log in to the IDP with valid credentials. Once the user has logged in to the IDP and the credentials have been validated, Pinniped is informed and the kubeconfig is created. This integration allows the use of secure, externally managed identities instead of relying on simple, shared credentials. The sequence of steps is outlined in the following diagram.
Let’s now see how this works in practice. The first step is to use tanzu login --endpoint to access the Supervisor cluster, as mentioned. The --name option provided is used to build the kubeconfig entry. This is what should happen.
$ tanzu login --endpoint https://192.168.0.33 --name pinniped-sv-cjh
ℹ  Detected a vSphere Supervisor being used
/usr/bin/xdg-open: 851: /usr/bin/xdg-open: www-browser: not found
/usr/bin/xdg-open: 851: /usr/bin/xdg-open: links2: not found
/usr/bin/xdg-open: 851: /usr/bin/xdg-open: elinks: not found
/usr/bin/xdg-open: 851: /usr/bin/xdg-open: links: not found
/usr/bin/xdg-open: 851: /usr/bin/xdg-open: lynx: not found
/usr/bin/xdg-open: 851: /usr/bin/xdg-open: w3m: not found
xdg-open: no method available for opening 'https://192.168.0.33/wcp/pinniped/oauth2/authorize?access_type=offline&client_id=pinniped-cli&code_challenge=6F0ZtcYWp_Erz7twHk95M0UyYgbSv8hEmgTxkomTpdg&code_challenge_method=S256&nonce=72fc136beb1cba7d667b011fcdb7b7e9&redirect_uri=http%3A%2F%2F127.0.0.1%3A43901%2Fcallback&response_mode=form_post&response_type=code&scope=offline_access+openid+pinniped%3Arequest-audience&state=172592e45059db9b29508bcb4678e657'
E0902 14:23:31.838759    5836 login.go:578] "msg"="could not open browser" "error"="exit status 3"

Log in by visiting this link:

    https://192.168.0.33/wcp/pinniped/oauth2/authorize?access_type=offline&client_id=pinniped-cli&code_challenge=6F0ZtcYWp_Erz7twHk95M0UyYgbSv8hEmgTxkomTpdg&code_challenge_method=S256&nonce=72fc136beb1cba7d667b011fcdb7b7e9&redirect_uri=http%3A%2F%2F127.0.0.1%3A43901%2Fcallback&response_mode=form_post&response_type=code&scope=offline_access+openid+pinniped%3Arequest-audience&state=172592e45059db9b29508bcb4678e657

Optionally, paste your authorization code:
Note that the xdg-open messages are caused by running this command on a headless host over SSH, where no browser can be launched. In cases like this, the next step is to copy the link provided in the “Log in by visiting this link” message. This connects to the IDP configured in step 3 and prompts for user credentials. If the credentials are valid, access is granted and the appropriate kubeconfig is built; if they are not, access is denied. These steps, if integrated with an IDP from Okta for example, may look something like this. The process may even require a verification code, depending on the sort of authentication that is configured.
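As an aside, the long authorize URL in the transcript above carries a PKCE code challenge (code_challenge_method=S256). Here is a minimal sketch of how a CLI client derives such a challenge from a random verifier, per the S256 method in RFC 7636; the verifier generation shown is illustrative, not necessarily how the tanzu CLI does it internally.

```python
import base64
import hashlib
import secrets

def pkce_pair() -> tuple[str, str]:
    """Generate a PKCE verifier and its S256 challenge (RFC 7636)."""
    # Random, high-entropy verifier, base64url-encoded without padding.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # The challenge is the base64url-encoded SHA-256 digest of the verifier.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = pkce_pair()
# The client sends the challenge in the authorize URL, and later proves
# possession of the verifier when exchanging the authorization code.
print(len(challenge))  # 43: an unpadded base64url-encoded SHA-256 digest
```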
If authentication is verified, then you should receive an authorization code similar to the following.
As it says, copy this code and paste it into the tanzu login command-line session. This should enable the authenticated user to access the Supervisor cluster, e.g.
Optionally, paste your authorization code: G2TcS145Q4e6A1YKf743n3BJlfQAQ_UdjXy38TtEEIo.ju4QV3PTsUvOigVUtQllZ7AJFU0YnjuLHTRVoNxvdZc
✔  successfully logged in to management cluster using the kubeconfig pinniped-sv-cjh
Checking for required plugins...
All required plugins are already installed and up-to-date

$ kubectl get nodes
NAME                               STATUS   ROLES                  AGE   VERSION
420a70053430ec6cde0e50a16896f52a   Ready    control-plane,master   26h   v1.23.5+vmware.wcp.2
420ac68e3f57b089d9584389c9e9431b   Ready    control-plane,master   26h   v1.23.5+vmware.wcp.2
420ae37b164ffd8763f97e1610b2ec54   Ready    control-plane,master   26h   v1.23.5+vmware.wcp.2
It is now possible for an external, IDP-controlled, non-vSphere SSO user to query the state of the Supervisor cluster, as shown by the node listing above.
Let’s now look at how the same user can access a Tanzu Kubernetes cluster.
6. Tanzu Kubernetes cluster access using IDP
To demonstrate this behaviour, I will create a new TKC and then try to query it.
$ tanzu cluster create -f classy1a-multi-az-ubuntu-network.yaml -v 8
compatibility file (/home/cormac/.config/tanzu/tkg/compatibility/tkg-compatibility.yaml) already exists, skipping download
BOM files inside /home/cormac/.config/tanzu/tkg/bom already exists, skipping download
You are trying to create a cluster with kubernetes version '' on vSphere with Tanzu,
Please make sure virtual machine image for the same is available in the cluster content library.
Do you want to continue? [y/N]: y
Validating configuration...
Waiting for the Tanzu Kubernetes Cluster service for vSphere workload cluster
waiting for cluster to be initialized...
.
.
.
successfully reconciled package: 'classy-ub-01a-vsphere-pv-csi' in namespace: 'vmware-system-tkg'
successfully reconciled package: 'classy-ub-01a-antrea' in namespace: 'vmware-system-tkg'
successfully reconciled package: 'classy-ub-01a-vsphere-cpi' in namespace: 'vmware-system-tkg'

Workload cluster 'classy-ub-01a' created
$
Note that the IDP user is now authorized to query the newly created guest cluster; we no longer need to append --admin to the tanzu cluster kubeconfig command (that option continues to be available if customers wish to get admin-level access to the cluster). Non-admin access is achieved by the Pinniped components on the TKC communicating with the Pinniped instance on the Supervisor cluster. This feature of Pinniped gives users a consistent, unified login experience across all your clusters, including both the Supervisor and TKCs.
$ tanzu cluster kubeconfig get classy-ub-01a -n cormac-ns
ℹ  You can now access the cluster by running 'kubectl config use-context tanzu-cli-classy-ub-01a@classy-ub-01a'

$ kubectl config use-context tanzu-cli-classy-ub-01a@classy-ub-01a
Switched to context "tanzu-cli-classy-ub-01a@classy-ub-01a".

$ kubectl get nodes
NAME                                               STATUS   ROLES                  AGE     VERSION
classy-ub-01a-cslfd-7mdxr                          Ready    control-plane,master   105s    v1.23.8+vmware.2
classy-ub-01a-cslfd-hgmbs                          Ready    control-plane,master   4m58s   v1.23.8+vmware.2
classy-ub-01a-cslfd-t4fl2                          Ready    control-plane,master   7m39s   v1.23.8+vmware.2
classy-ub-01a-node-pool-1-w6ghx-685577d794-knlj5   Ready    <none>                 3m52s   v1.23.8+vmware.2
classy-ub-01a-node-pool-2-7zcc8-5d999d988d-wnl6z   Ready    <none>                 4m56s   v1.23.8+vmware.2
classy-ub-01a-node-pool-3-sxdvr-584657c9df-t9jhw   Ready    <none>                 6m10s   v1.23.8+vmware.2
Success! The authenticated user can access the TKC, and there was no need to use the --admin context. One point to make here is that this user is bound to the vSphere Namespace: the user does not have access to query cluster-scoped objects, as shown below.
$ tanzu cluster list
Error: unable to retrieve combined cluster info: unable to get list of clusters: failed to list *v1beta1.ClusterList: clusters.cluster.x-k8s.io is forbidden: User "email@example.com" cannot list resource "clusters" in API group "cluster.x-k8s.io" at the cluster scope
Error: exit status 1
✖  exit status 1

$ tanzu cluster list -n cormac-ns
  NAME           NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES   PLAN  TKR
  classy-ub-01a  cormac-ns  running  3/3           3/3      v1.23.8+vmware.2  <none>        v1.23.8---vmware.2-tkg.2-zshippable
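This namespace-bound behaviour is standard Kubernetes RBAC at work: the user holds a binding scoped to the vSphere Namespace, but no cluster-scoped binding, so listing clusters across all namespaces is forbidden while listing within cormac-ns succeeds. A rough, illustrative sketch of the kind of namespace-scoped binding involved is shown below; the object names are my own assumptions, not the actual resources the Supervisor creates.

```yaml
# Illustrative only: a RoleBinding like this grants access inside cormac-ns,
# while cluster-scoped requests remain forbidden for the same user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-edit-binding        # hypothetical name
  namespace: cormac-ns
subjects:
- kind: User
  name: email@example.com           # the username claim from the IDP
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                        # built-in aggregated role
  apiGroup: rbac.authorization.k8s.io
```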
Similarly, if firstname.lastname@example.org were removed from the list of users with permissions on the vSphere Namespace (i.e., if we had skipped step 4), then the user would be unable to query any objects in the cormac-ns vSphere Namespace.
Now if I try to interact with the cluster as this IDP authenticated user, I am unable to do so.
$ kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "email@example.com" cannot list resource "nodes" in API group "" at the cluster scope
This authentication mechanism, along with the support for tanzu CLI, is a very nice addition to vSphere with Tanzu in vSphere 8.0, and something many customers have been waiting for.