Tanzu Mission Control – VMworld 2019 Updates

After spending some time watching, digesting and then writing about Project Pacific Deep Dive updates from VMworld 2019, the next item on my to-do list was to get up to speed on VMware Tanzu, or to be more specific, Tanzu Mission Control. The reason I am being more specific is that VMware Tanzu is a broad portfolio of products and features which can be categorized into three distinct areas. These areas are Build, Run and Manage. The Build category relates to initiatives taking place in the developer space, notably with Bitnami and Pivotal, the former having recently been acquired by…

Project Pacific – VMworld 2019 Deep Dive Updates

I’m sure most readers will be somewhat familiar with VMware’s Project Pacific at this point. It really is the buzz of VMworld 2019. If I had to describe Project Pacific in as few words as possible, it is a merging of vSphere and Kubernetes (K8s) with the goal of enabling our customers to deploy new, next-gen, distributed, modern applications which may be made up of container workloads, or of combined container and virtual machine workloads. Not only that, we also need to provide our customers with a consistent way of managing, monitoring and securing these new modern applications. This is where Project…

CNS – not just for vSAN

After a very eventful VMworld, we received lots of questions about CNS, the Cloud Native Storage feature that was released with vSphere 6.7U3. Whilst most of the demonstrations and blog articles around CNS focused on vSAN, what may have been missed is that this feature also works with both VMFS and NFS datastores. For that reason, I decided to create some examples of how CNS can also bubble up information in vSphere about Kubernetes Persistent Volumes (PVs) created on both VMFS and NFS datastores. Let’s begin by creating some simple policies to tag my VMFS datastore and my NFS datastore.…
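As a rough sketch of that first tagging step (not the exact commands from the original post, and with placeholder names for the vCenter Server, tag category, tags and datastores), the PowerCLI below creates a datastore tag category, one tag per datastore type, and assigns each tag to the relevant datastore:

Connect-VIServer -Server vcsa.example.com

# A single-cardinality tag category that can only be applied to datastores
New-TagCategory -Name "StorageType" -Cardinality Single -EntityType Datastore

# One tag for each type of datastore
New-Tag -Name "VMFS-Tag" -Category "StorageType"
New-Tag -Name "NFS-Tag"  -Category "StorageType"

# Assign the tags to the respective datastores (datastore names are placeholders)
New-TagAssignment -Tag (Get-Tag "VMFS-Tag") -Entity (Get-Datastore "vmfs-ds-01")
New-TagAssignment -Tag (Get-Tag "NFS-Tag")  -Entity (Get-Datastore "nfs-ds-01")

A tag-based VM storage policy built on each tag can then be referenced from a Kubernetes StorageClass, so that Persistent Volumes are provisioned on the intended datastore.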

Safekeeping – a useful tool for interacting with First Class Disks/Improved Virtual Disks

I have been doing quite a bit of work on First Class Disks (FCD), also known as Improved Virtual Disks (IVD), over the past number of months. One tool that has been extremely useful in improving my understanding of FCDs has been safekeeping, a tool developed by Max Daneri of VMware, which is now available to download on GitHub. If you did not know, FCDs are used extensively in VMware’s new Cloud Native Storage (CNS) offering that is currently available with vSphere/vSAN 6.7U3. Now, whilst the primary aim of this tool is to help backup vendors become familiar with…
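Safekeeping itself is a Java tool, but as a quick, hedged illustration of the kind of objects it works with, recent PowerCLI releases also include VDisk cmdlets for First Class Disks. Assuming a PowerCLI version that ships those cmdlets, and with a placeholder datastore name, listing the FCDs on a datastore looks something like this:

Connect-VIServer -Server vcsa.example.com

# List the First Class Disks (VDisk objects) that live on a given datastore
Get-VDisk -Datastore (Get-Datastore "vsan-ds-01") |
    Select-Object Name, CapacityGB, Datastore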

Video of HCI Spotlight Session from #VMworld now available #HCI3551KE

This week at VMworld in Barcelona, I was honored to be able to co-present the HCIBU Spotlight Session with our GM and SVP, John Gilmartin. I noticed that the full video is now available online on the VMworld Video site. If you want to learn more about how to Future Proof your Infrastructure with vSAN and VMware Cloud Foundation, give it a watch. The cool demos, showing Cloud Native Storage, Site Recovery Manager support for vVols, and Project Magna auto-tuning vSAN, all start around the 30-minute mark. The full video is available here. Enjoy!

Finding VMDK path from PV VolumeHandle

I’ve been looking at ways in which we could query the mappings of objects between the Kubernetes layer and the vSphere layer. One thing that I really wanted to figure out was whether, given the VolumeHandle from a Persistent Volume in Kubernetes, I could easily find the datastore and path using PowerCLI. It looks like I can. Let’s begin with a look at the Persistent Volume, or PV for short. Note that this is a Kubernetes cluster that is using the new vSphere CSI driver.
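As a hedged sketch of the general idea (not necessarily the exact code from the post), the VolumeHandle reported by the CSI driver is the ID of a First Class Disk, so it can be handed to the vStorageObjectManager in the vSphere API to retrieve the backing file path. The vCenter name and the VolumeHandle value below are placeholders:

Connect-VIServer -Server vcsa.example.com

# volumeHandle copied from "kubectl describe pv" (placeholder value)
$volumeHandle = "c3a9f4e2-0000-0000-0000-000000000000"

$si   = Get-View ServiceInstance
$vsom = Get-View $si.Content.VStorageObjectManager

# The FCD id must be wrapped in a VMware.Vim.ID object
$id = New-Object VMware.Vim.ID
$id.Id = $volumeHandle

# Check each datastore until the First Class Disk is found
foreach ($ds in Get-Datastore) {
    try {
        $fcd = $vsom.RetrieveVStorageObject($id, $ds.ExtensionData.MoRef)
        Write-Host "Found on" $ds.Name ":" $fcd.Config.Backing.FilePath
        break
    } catch {
        # Not on this datastore, keep looking
    }
}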

Moving a Stateful App from VCP to CSI based Kubernetes cluster using Velero

Since the release of the vSphere CSI driver in vSphere 6.7U3, I have had a number of requests about how we plan to migrate applications between Kubernetes clusters that are using the original in-tree vSphere Cloud Provider (VCP) and Kubernetes clusters that are built with the new vSphere CSI driver. All I can say at this point is that we are looking at ways to achieve this seamlessly in the future, and that the Kubernetes community also has a migration design in the works for moving from in-tree providers to the new CSI drivers.…