Portworx, STORK and container volume snapshots

As I continue on my cloud native storage journey, I found myself looking at Portworx. The reason for this was that Portworx provides a plugin for the Heptio Velero product, and I was interested to see how this behaved on top of my vSphere on-premises infrastructure. I’ve written about Velero a few times already, and done a few posts where I leveraged the Restic plugin for snapshot functionality. Thus, I wanted to see how Portworx achieved the same thing, and wanted to learn a bit more about STORK, Portworx’s Storage Orchestrator for Kubernetes. I’ve written about…

A closer look at VVol snapshot policies on Pure Storage with vSphere 6.7

I am in the very fortunate position of having access to a Pure Storage array, and this has been recently updated to support Virtual Volumes. With my new 6.7 vSphere cluster, I finally found some time to take a closer look at Virtual Volume (VVol) snapshots on the Pure array, something that I have been meaning to do for some time. For those of you who are new to Virtual Volumes (VVols), one of the major advantages is the granularity at which certain operations can now be done. In the past, we were always dealing with data services at the…

Cloning and Snapshots on vSAN when policy requirements cannot be met

I was looking into some behavior recently to assist one of our partners. He described a situation observed during proof-of-concept testing. I thought it would be of benefit to highlight this behavior in case you also observe it, and you are curious as to why it is happening. Let’s begin with a description of the test. The customer has a 7-node vSAN cluster, and has implemented RAID-6 erasure coding for all VMs across the board. The customer isolated one host, and as expected, the VMs continued to run without issue. The customer was also able to clone virtual machine…
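
A clone attempt like the one in this test can also be scripted. Below is a minimal pyVmomi sketch of that idea, assuming a reachable vCenter, a source VM with the RAID-6 policy applied, and the default vsanDatastore; the vCenter address, credentials, VM and datastore names are all placeholders and are purely illustrative.

```python
# Illustrative sketch: clone a VM on a vSAN datastore, e.g. while a host is
# isolated and the RAID-6 policy cannot be fully met. All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Helper to locate a managed object by name (illustrative only)
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

src_vm = find_by_name(vim.VirtualMachine, "raid6-test-vm")
vsan_ds = find_by_name(vim.Datastore, "vsanDatastore")

# Clone onto the same vSAN datastore; the interesting part is whether the task
# succeeds while a host is isolated, and how the resulting objects are placed.
clone_spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(datastore=vsan_ds),
                              powerOn=False)
task = src_vm.CloneVM_Task(folder=src_vm.parent, name="raid6-test-vm-clone",
                           spec=clone_spec)
WaitForTask(task)

Disconnect(si)
```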

Snapshot Consolidation changes in vSphere 6.0

This is something I only learnt about very recently. It seems that we have made a major improvement to the way we do snapshot consolidation in vSphere 6.0. Many of you will be aware of the fact that when the VM is very busy, snapshot consolidation may need to go through multiple iterations before we can successfully complete the consolidation/roll-up operation. In fact, there are situations where the snapshot consolidation operation could even fail if there is too much I/O. What we did previously was use a helper snapshot, and redirect all the new…
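
To see whether a VM has been left in a state where consolidation is required, a short pyVmomi sketch like the one below could be used to check the consolidationNeeded flag and kick off the roll-up; the vCenter details and VM name are placeholder assumptions.

```python
# Illustrative sketch: check whether a VM needs snapshot consolidation and
# trigger it. Connection details and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "busy-vm")
view.DestroyView()

# runtime.consolidationNeeded is set when a snapshot delete left delta disks
# behind that still need to be rolled up into the parent.
if vm.runtime.consolidationNeeded:
    WaitForTask(vm.ConsolidateVMDisks_Task())

Disconnect(si)
```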

VSAN 6.0 Part 5 – new vsanSparse snapshots

There is a new snapshot format introduced in VSAN 6.0 called vsanSparse. It replaces the traditional vmfsSparse format (redo logs). The vmfsSparse format was used when snapshots of VMs were taken in VSAN 5.5, and it is also the format used when a snapshot is taken of a VM residing on traditional VMFS or NFS. The older vmfsSparse format left a lot to be desired when it came to performance and scalability. This KB article from our support team, indicating that no snapshot should be used for more than 72 hours, and that snapshot chains should contain no more than 2-3 snapshots,…

More Virtual Volumes (VVols) and Snapshots goodness

Well, I got so many questions about my previous articles on a new way of doing snapshots with VVols that I decided to take the time to dig even deeper into their behaviour. In this setup, I take a Windows 2008 Guest OS running in a virtual machine deployed on an NFS datastore, and I compare it to an identical VM deployed on a VVol datastore. This is purely a look at how we do snapshots. Remember that with VVols, the VM always runs on the base disk, compared to the traditional way of doing snapshots where the VM always runs on the…
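
One way to observe the difference is to snapshot both VMs and dump their file layouts. Here is a rough pyVmomi sketch of that idea, assuming two VMs named win2008-nfs and win2008-vvol; the VM names, vCenter address and credentials are placeholders.

```python
# Illustrative sketch: take a snapshot of each VM and print its file layout,
# to see whether the running point stays on the base disk (VVols) or moves to
# a delta/redo log (NFS/VMFS). All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vms = {v.name: v for v in view.view if v.name in ("win2008-nfs", "win2008-vvol")}
view.DestroyView()

for name, vm in vms.items():
    WaitForTask(vm.CreateSnapshot_Task(name="compare", description="layout test",
                                       memory=False, quiesce=False))
    print(name)
    for f in vm.layoutEx.file:
        # The disk descriptor/extent entries show whether a delta file or the
        # base VMDK is now part of the running chain.
        print(" ", f.type, f.name)

Disconnect(si)
```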

vSphere 5.1 Storage Enhancements – Part 7: Storage vMotion

Let’s begin this post with a recap of the Storage vMotion enhancements made in vSphere 5.0. Storage vMotion in vSphere 5.0 enabled the migration of virtual machines with snapshots and also the migration of linked clones. It also introduced a new mirroring architecture, which mirrors writes to disk blocks that have already been copied to the destination, i.e. we fork a write to both source and destination using mirror mode. This means migrations can be done in a single copy operation. Mirroring I/O between the source and the destination disks has significant gains when compared to the iterative disk pre-copy…
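
For context, a Storage vMotion of a VM with snapshots can be kicked off with a simple relocate call; the pyVmomi sketch below is illustrative only, and the vCenter, VM and datastore names are assumptions.

```python
# Illustrative sketch: trigger a Storage vMotion by relocating a VM (which may
# have snapshots) to another datastore. All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine, vim.Datastore], True)
vm = next(o for o in view.view if isinstance(o, vim.VirtualMachine) and o.name == "snap-vm")
dst = next(o for o in view.view if isinstance(o, vim.Datastore) and o.name == "target-datastore")
view.DestroyView()

# A relocate spec that only changes the datastore results in a Storage vMotion;
# the mirror-mode copy described above is handled internally by the platform.
spec = vim.vm.RelocateSpec(datastore=dst)
WaitForTask(vm.RelocateVM_Task(spec))

Disconnect(si)
```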