Helm Chart for vSphere CSI driver
After recently presenting on the topic of the vSphere CSI driver, I received feedback from a number of different people that the current install mechanism is a little long-winded and prone to error. The request was for a Helm Chart to make things a little easier. I spoke to a few people about this internally, and while we have some long term plans to make this process easier, we didn't have any plans in the short term. At that point, I reached out to my colleague and good pal, Myles Gray, and we decided we would try to create our own Helm Chart as a stop-gap measure to help both our field and customers. So I am delighted to share with you our new Helm Chart for the vSphere CSI driver. (At the moment, this is on my personal GitHub repo, but hopefully it will be moved to a centralized VMware repo soon.)
Whilst the README file has a bunch of information related to how to deploy the Helm Chart, I wanted to use this post to provide some additional details.
Initial Install
The Helm Chart is currently stored on my GitHub repository, so in order to access it, you will have to add that repository and then run an update to get the latest charts, as shown here.
```
% helm repo add cormachogan https://cormachogan.github.io/vsphere-csi-helmchart
"cormachogan" has been added to your repositories

% helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "cormachogan" chart repository
Update Complete. ⎈ Happy Helming!⎈

% helm repo list
NAME         URL
cormachogan  https://cormachogan.github.io/vsphere-csi-helmchart
```
```
% tree vsphere-csi-helmchart
vsphere-csi-helmchart
├── README.md
├── _config.yml
├── charts
│   └── vsphere-csi
│       ├── Chart.yaml
│       ├── OWNERS
│       ├── README.md
│       ├── charts
│       │   └── vsphere-cpi-0.1.3.tgz
│       ├── templates
│       │   ├── NOTES.txt
│       │   ├── _helpers.tpl
│       │   ├── clusterrole.yaml
│       │   ├── clusterrolebinding.yaml
│       │   ├── csi-daemonset.yaml
│       │   ├── csi-deployment.yaml
│       │   ├── csi-driver.yaml
│       │   ├── secret.yaml
│       │   └── serviceaccount.yaml
│       └── values.yaml
└── index.yaml

4 directories, 17 files
```
The important things to point out are the files in vsphere-csi-helmchart/charts/vsphere-csi/templates. These are all of the manifest files that make up the CSI driver deployment. For the most part, these are static. However, we do need to configure the csi-vsphere.conf configuration file in the secret manifest, so these values are passed in at the command line when the helm chart is deployed. We will see how to do this shortly.
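To illustrate how those command-line values flow into the secret, here is a simplified, purely illustrative sketch of what a templated secret.yaml might look like. This is an assumption on my part for explanatory purposes; the actual template in the chart may differ, and only the keys shown later in this post (config.clusterId, config.vcenter, config.password, config.datacenter) are taken from the real chart:

```yaml
# Illustrative sketch only -- the real secret.yaml in the chart may differ
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-config-secret
  namespace: kube-system
stringData:
  csi-vsphere.conf: |
    [Global]
    cluster-id = "{{ .Values.config.clusterId }}"

    [VirtualCenter "{{ .Values.config.vcenter }}"]
    password = "{{ .Values.config.password }}"
    datacenters = "{{ .Values.config.datacenter }}"
```

Any value not supplied with --set falls back to the defaults in values.yaml, which is why the untouched template render shows placeholder entries like "vcenter.local".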
Another interesting thing to note is the inclusion of vsphere-csi-helmchart/charts/vsphere-csi/charts/vsphere-cpi-0.1.3.tgz. This places a dependency on deploying the vSphere CPI (the Cloud Provider Interface, which runs as the Cloud Controller Manager). Both the CPI and the CSI are needed when running Kubernetes on vSphere, so this way both can be installed simultaneously.
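The bundled .tgz is how Helm packages a subchart dependency. As a hedged sketch, the parent chart's Chart.yaml stanza that wires it in might look something like the following; the name and version are inferred from the bundled vsphere-cpi-0.1.3.tgz, while the condition key is a hypothetical example of how such a subchart is typically made toggleable:

```yaml
# Hypothetical Chart.yaml excerpt -- inferred from the bundled subchart,
# not copied from the actual chart
dependencies:
  - name: vsphere-cpi
    version: 0.1.3
    # a condition like this would let the CPI be toggled from parent values
    condition: vsphere-cpi.enabled
```

This is also why CPI-specific settings are passed with the vsphere-cpi.config.* prefix later in this post: Helm scopes subchart values under the subchart's name.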
Initial Test
Before deploying the CSI driver, there are some useful helm commands to test the deployment. For example:
```
% helm template --debug vsphere-csi cormachogan/vsphere-csi
```
This command will display all of the manifests that would be applied. When run without any arguments, it simply picks up the default entries that we have placed in the csi-vsphere.conf section of the secret manifest:
```
<snip>
---
# Source: vsphere-csi/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-config-secret
  namespace: kube-system
  labels:
    helm.sh/chart: vsphere-csi-0.0.3
    app.kubernetes.io/name: vsphere-csi
    app.kubernetes.io/instance: vsphere-csi
    app.kubernetes.io/version: "2.0.0"
    app.kubernetes.io/managed-by: Helm
stringData:
  csi-vsphere.conf: |
    [Global]
    cluster-id = "cluster"

    [VirtualCenter "vcenter.local"]
    user = "administrator@vsphere.local"
    password = "pass"
    port = "443"
    insecure-flag = "1"
    ca-file = "ca"
    datacenters = "datacenter"
<snip>
```
However the same command can be run with the configuration values enabled so that you can test placing appropriate values in the configuration file, as follows:
```
% helm template --debug vsphere-csi cormachogan/vsphere-csi \
    --set config.vcenter=vcsa-01.rainpole.com
```
And you should see the values provided (in this case VirtualCenter) updated as follows:
```
<snip>
---
# Source: vsphere-csi/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-config-secret
  namespace: kube-system
  labels:
    helm.sh/chart: vsphere-csi-0.0.3
    app.kubernetes.io/name: vsphere-csi
    app.kubernetes.io/instance: vsphere-csi
    app.kubernetes.io/version: "2.0.0"
    app.kubernetes.io/managed-by: Helm
stringData:
  csi-vsphere.conf: |
    [Global]
    cluster-id = "cluster"

    [VirtualCenter "vcsa-01.rainpole.com"]
    user = "administrator@vsphere.local"
    password = "pass"
    port = "443"
    insecure-flag = "1"
    ca-file = "ca"
    datacenters = "datacenter"
<snip>
```
Another way to do a sample test install is to include the --dry-run and --debug options. This will check to make sure that there are no existing configuration settings that will conflict with the CPI and CSI driver deployment. Here are some examples:
```
% helm install --dry-run --debug vsphere-csi cormachogan/vsphere-csi
install.go:172: [debug] Original chart version: ""
install.go:189: [debug] CHART PATH: /Users/chogan/Library/Caches/helm/repository/vsphere-csi-0.0.3.tgz

NAME: vsphere-csi
LAST DEPLOYED: Tue Sep 1 11:07:09 2020
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
```
Here is one with a value configured on the command line:
```
% helm install --dry-run --debug vsphere-csi cormachogan/vsphere-csi \
    --set config.vcenter=vcsa-01.rainpole.com
install.go:172: [debug] Original chart version: ""
install.go:189: [debug] CHART PATH: /Users/chogan/Library/Caches/helm/repository/vsphere-csi-0.0.3.tgz

NAME: vsphere-csi
LAST DEPLOYED: Tue Sep 1 11:08:46 2020
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
config:
  vcenter: vcsa-01.rainpole.com
```
This command also displays the manifests in its output and, if it is successful, the contents of the NOTES.txt file from the templates folder.
```
NOTES:
# Verify that CSI has been successfully deployed #

To verify that the CSI driver has been successfully deployed, you should observe
that there is one instance of the vsphere-csi-controller running on the master node
and that an instance of the vsphere-csi-node is running on each of the worker nodes.

$ kubectl get deployment --namespace=kube-system
$ kubectl get daemonsets vsphere-csi-node --namespace=kube-system

# Verify that the vSphere CSI driver has been registered with Kubernetes #

$ kubectl describe csidrivers

# Verify that the CSINodes have been created #

$ kubectl get CSINode
```
We are now ready to do the install.
Installation
Since we are deploying both the CPI and CSI, values for the vsphere.conf (CPI) and csi-vsphere.conf (CSI) must both be provided at the command line. Here is an example taken from one of my lab environments:
```
% helm install vsphere-csi cormachogan/vsphere-csi --namespace kube-system \
    --set config.enabled=true \
    --set config.vcenter=10.27.51.106 \
    --set config.password=VMware1\! \
    --set config.datacenter=Datacenter \
    --set config.clusterId=CH-K8s-Cluster \
    --set vsphere-cpi.config.enabled=true \
    --set vsphere-cpi.config.vcenter=10.27.51.106 \
    --set vsphere-cpi.config.password=VMware1\! \
    --set vsphere-cpi.config.datacenter=Datacenter \
    --set netconfig.enabled=false

NAME: vsphere-csi
LAST DEPLOYED: Tue Sep 1 11:28:52 2020
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
# Verify that CSI has been successfully deployed #

To verify that the CSI driver has been successfully deployed, you should observe
that there is one instance of the vsphere-csi-controller running on the master node
and that an instance of the vsphere-csi-node is running on each of the worker nodes.

$ kubectl get deployment --namespace=kube-system
$ kubectl get daemonsets vsphere-csi-node --namespace=kube-system

# Verify that the vSphere CSI driver has been registered with Kubernetes #

$ kubectl describe csidrivers

# Verify that the CSINodes have been created #

$ kubectl get CSINode
%
```
Above, I have provided vCenter credentials for both the CSI and CPI. Going forward, we should be able to use a global value for both. Another thing to note is that I have explicitly disabled the netconfig values. These are used when you want to use the CSI driver to dynamically consume vSAN File Services. This feature is currently only available with CSI 2.0 on upstream Kubernetes. If you decide that you want to consume vSAN File Services for read-write-many PVs, you can update the deployment as follows (note the use of upgrade --install):
```
% helm upgrade --install vsphere-csi cormachogan/vsphere-csi --namespace kube-system \
    --set config.enabled=true \
    --set config.vcenter=10.27.51.106 \
    --set config.password=VMware1\! \
    --set config.datacenter=Datacenter \
    --set config.clusterId=CH-K8s-Cluster \
    --set vsphere-cpi.config.enabled=true \
    --set vsphere-cpi.config.vcenter=10.27.51.106 \
    --set vsphere-cpi.config.password=VMware1\! \
    --set vsphere-cpi.config.datacenter=Datacenter \
    --set netconfig.enabled=true \
    --set netconfig.ips='*' \
    --set netconfig.permissions=READ_WRITE \
    --set netconfig.rootsquash=true \
    --set netconfig.datastore=ds:///vmfs/volumes/vsan:52e2cfb57ce8d5d3-c12e042893ff2f76/

Release "vsphere-csi" has been upgraded. Happy Helming!
NAME: vsphere-csi
LAST DEPLOYED: Tue Sep 1 11:37:18 2020
NAMESPACE: kube-system
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
# Verify that CSI has been successfully deployed #

To verify that the CSI driver has been successfully deployed, you should observe
that there is one instance of the vsphere-csi-controller running on the master node
and that an instance of the vsphere-csi-node is running on each of the worker nodes.

$ kubectl get deployment --namespace=kube-system
$ kubectl get daemonsets vsphere-csi-node --namespace=kube-system

# Verify that the vSphere CSI driver has been registered with Kubernetes #

$ kubectl describe csidrivers

# Verify that the CSINodes have been created #

$ kubectl get CSINode
```
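As an aside, long runs of --set flags like the ones above can get unwieldy and error-prone. Helm also accepts the same settings from a values file passed with -f. The sketch below simply restates the keys from the commands above in file form; the file name my-values.yaml is arbitrary:

```yaml
# my-values.yaml -- hypothetical file name; keys mirror the --set flags above
config:
  enabled: true
  vcenter: 10.27.51.106
  password: "VMware1!"
  datacenter: Datacenter
  clusterId: CH-K8s-Cluster
vsphere-cpi:
  config:
    enabled: true
    vcenter: 10.27.51.106
    password: "VMware1!"
    datacenter: Datacenter
netconfig:
  enabled: false
```

It would then be deployed with: helm upgrade --install vsphere-csi cormachogan/vsphere-csi --namespace kube-system -f my-values.yaml. This also avoids having to shell-escape characters such as the ! in the password.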
Note that this Helm Chart only supports a single netconfig entry. If you want to add additional netconfig entries so that there are different parameters for different networks, you will have to revert to the manual deployment mechanism and hand-edit the csi-vsphere.conf.
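For reference, a hand-edited csi-vsphere.conf with more than one NetPermissions section might look something like this. The "A" section matches the values deployed above; the "B" section, including its network range, is a hypothetical example of granting different permissions to a second network:

```ini
; NetPermissions "A" mirrors the netconfig values set via Helm above
[NetPermissions "A"]
ips = "*"
permissions = "READ_WRITE"
rootsquash = "true"

; Hypothetical second entry: read-only access for one specific subnet
[NetPermissions "B"]
ips = "10.20.20.0/24"
permissions = "READ_ONLY"
rootsquash = "false"
```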
Checking the csi-vsphere.conf entries
Finally, you may want to check that the csi-vsphere.conf entries have actually been added correctly. You can do this by displaying the contents of the secret, and then decoding the base64-encoded data field, as follows:
```
% kubectl get secret -n kube-system -o yaml vsphere-config-secret
apiVersion: v1
data:
  csi-vsphere.conf: W0dsb2JhbF0KY2x1c3Rlci1pZCA9ICJDSC1LOHMtQ2x1c3RlciIKW1ZpcnR1YWxDZW50ZXIgIjEwLjI3LjUxLjEwNiJdCnVzZXIgPSAiYWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2FsIgpwYXNzd29yZCA9ICJWTXdhcmUxMjMhIgpwb3J0ID0gIjQ0MyIKaW5zZWN1cmUtZmxhZyA9ICIxIgpjYS1maWxlID0gImNhIgpkYXRhY2VudGVycyA9ICJEYXRhY2VudGVyIgoKdGFyZ2V0dlNBTkZpbGVTaGFyZURhdGFzdG9yZVVSTHMgPSAiZHM6Ly8vdm1mcy92b2x1bWVzL3ZzYW46NTJlMmNmYjU3Y2U4ZDVkMy1jMTJlMDQyODkzZmYyZjc2LyIKCgpbTmV0UGVybWlzc2lvbnMgIkEiXQoKaXBzID0gIioiCnBlcm1pc3Npb25zID0gIlJFQURfV1JJVEUiCnJvb3RzcXVhc2ggPSAidHJ1ZSIK
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: vsphere-csi
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2020-09-01T10:28:51Z"
  labels:
    app.kubernetes.io/instance: vsphere-csi
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vsphere-csi
    app.kubernetes.io/version: 2.0.0
    helm.sh/chart: vsphere-csi-0.0.3
  name: vsphere-config-secret
  namespace: kube-system
  resourceVersion: "32121023"
  selfLink: /api/v1/namespaces/kube-system/secrets/vsphere-config-secret
  uid: 22845fd1-01ea-4106-9878-996004dcbc15
type: Opaque
```
Next, take the base64-encoded data and decode it. It should display the populated csi-vsphere.conf:
```
% echo "W0dsb2JhbF0KY2x1c3Rlci1pZCA9ICJDSC1LOHMtQ2x1c3RlciIKW1ZpcnR1YWxDZW50ZXIgIjEwLjI3Lj\
UxLjEwNiJdCnVzZXIgPSAiYWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2FsIgpwYXNzd29yZCA9ICJWTXdhcmUxMjMh\
Igpwb3J0ID0gIjQ0MyIKaW5zZWN1cmUtZmxhZyA9ICIxIgpjYS1maWxlID0gImNhIgpkYXRhY2VudGVycyA9ICJEYX\
RhY2VudGVyIgoKdGFyZ2V0dlNBTkZpbGVTaGFyZURhdGFzdG9yZVVSTHMgPSAiZHM6Ly8vdm1mcy92b2x1bWVzL3Zz\
YW46NTJlMmNmYjU3Y2U4ZDVkMy1jMTJlMDQyODkzZmYyZjc2LyIKCgpbTmV0UGVybWlzc2lvbnMgIkEiXQoKaXBzID\
0gIioiCnBlcm1pc3Npb25zID0gIlJFQURfV1JJVEUiCnJvb3RzcXVhc2ggPSAidHJ1ZSIK" | base64 -d
[Global]
cluster-id = "CH-K8s-Cluster"

[VirtualCenter "10.27.51.106"]
user = "administrator@vsphere.local"
password = "VMware123!"
port = "443"
insecure-flag = "1"
ca-file = "ca"
datacenters = "Datacenter"

targetvSANFileShareDatastoreURLs = "ds:///vmfs/volumes/vsan:52e2cfb57ce8d5d3-c12e042893ff2f76/"

[NetPermissions "A"]
ips = "*"
permissions = "READ_WRITE"
rootsquash = "true"
```
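Rather than copying the base64 string by hand, the fetch-and-decode can also be done in one pipeline using kubectl's standard jsonpath output (the dot in the key name must be escaped). The second half of the sketch is a self-contained round-trip demo of the same encode/decode mechanics that needs no cluster; the sample config content in it is an assumption for illustration:

```shell
# Against a live cluster, the whole decode can be a single pipeline
# (note the escaped dot in the data key name):
#   kubectl get secret vsphere-config-secret -n kube-system \
#     -o jsonpath='{.data.csi-vsphere\.conf}' | base64 -d

# Round-trip demo of the same mechanics, no cluster required.
# Sample config content below is illustrative only.
conf='[Global]
cluster-id = "CH-K8s-Cluster"'

# Encode the way Kubernetes stores Secret data (strip any base64 line wraps)
encoded=$(printf '%s' "$conf" | base64 | tr -d '\n')

# Decoding recovers the original config verbatim
printf '%s' "$encoded" | base64 -d
```

This is handy in scripts, since nothing sensitive ends up pasted into a shell history in plain text until the final decode step.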
Deleting the CSI Helm Chart
And of course, if you make a mistake, you can simply remove it and start again.
```
% helm delete vsphere-csi --namespace kube-system
release "vsphere-csi" uninstalled
```
Kudos once again to Myles for his guidance and help on this. Happy Helming!