In this third article in the series on backing up the vCloud Suite, we turn our attention to NSX, VMware’s network virtualization product. Before starting, I should point out that NSX has a recommended way of backing up and restoring configuration information via an FTP server, which you need to configure in your infrastructure to hold the exported metadata. However, this exercise looks at how you might use VDP to back up and restore an NSX configuration using image-level backups. Once again, I wanted to see whether I could restore the NSX environment to a particular point in time, both in-place and by restoring to a new location. This is the same infrastructure that I used for backing up and restoring vCops, and for backing up and restoring vCAC and VCO. On this occasion, I was using NSX version 6.0.4, vCenter 5.5U1 and VDP version 188.8.131.52.
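As an aside, the recommended FTP-based backup mentioned above can be configured through the NSX Manager REST API. The sketch below is illustrative only: the host names, credentials, directory and file prefix are all assumptions for your own environment. It first sets the FTP backup destination, then triggers an on-demand backup:

```shell
# Illustrative settings for NSX Manager's FTP backup destination
# (all host names, credentials and paths here are assumptions)
cat > backupsettings.xml <<'EOF'
<backupRestoreSettings>
  <ftpSettings>
    <transferProtocol>FTP</transferProtocol>
    <hostNameIPAddress>ftp.example.com</hostNameIPAddress>
    <port>21</port>
    <userName>backupuser</userName>
    <password>secret</password>
    <backupDirectory>/nsx-backups</backupDirectory>
    <filenamePrefix>nsxmgr</filenamePrefix>
    <passPhrase>encryption-passphrase</passPhrase>
  </ftpSettings>
</backupRestoreSettings>
EOF

# Push the settings to NSX Manager
curl -k -u admin:password -X PUT \
  -H "Content-Type: application/xml" -d @backupsettings.xml \
  https://nsx-manager.example.com/api/1.0/appliance-management/backuprestore/backupsettings

# Kick off an on-demand backup to the FTP server
curl -k -u admin:password -X POST \
  https://nsx-manager.example.com/api/1.0/appliance-management/backuprestore/backup
```

Exports written this way can then be restored onto a freshly deployed NSX Manager appliance of the same version, which is the officially supported path.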
This post is a follow-on to a previous post I did on vCops and VDP interoperability. In this scenario, I am going to try to use vSphere Data Protection (VDP), which is VMware’s backup/restore product, to back up and restore a vCloud Automation Center (vCAC) v6.0.1 and vCenter Orchestrator (VCO) v5.5 deployment.
In this particular scenario, nine virtual machines make up my vCAC and VCO deployment. VCO has been deployed in an HA configuration, which accounts for two of the VMs. The others make up the DEM, Manager, Web, vCAC and SSO components, along with the various databases for vCloud Automation Center.
I was in a conversation with one of my pals over at Tintri last week (Fintan), and he had observed some strange behaviour when provisioning VMs from a catalog in vCloud Director (vCD). When he disabled Fast Provisioning, he expected that provisioning further VMs from the catalog would still be offloaded via the VAAI-NAS plugin, since all the ESXi hosts have the VAAI-NAS plugin from Tintri installed. However, it seemed that the provisioning/cloning operation was not being offloaded to the array, and the ESXi hosts’ resources were being used for the operation instead. Deployments of VMs from the catalogs were taking minutes rather than seconds. What was going on?
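If you want to do a quick sanity check of your own, the esxcli commands below verify that the vendor’s VAAI-NAS plugin is installed and that hardware acceleration is being reported for the NFS datastores. Note the grep pattern is an assumption; the exact VIB name varies by vendor:

```shell
# List installed VIBs and look for the vendor's VAAI-NAS plugin
# ("tintri" as a search string is an assumption; check your vendor's VIB name)
esxcli software vib list | grep -i tintri

# Show mounted NFS datastores; the Hardware Acceleration column indicates
# whether VAAI-NAS offload is available for each volume
esxcli storage nfs list
```

If the plugin is present and acceleration shows as supported, yet clones still run on the host, the offload decision is being made higher up the stack, which is exactly what turned out to be happening here.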
I’m a bit late in bringing this to your attention, but there is a potential issue with VASA storage providers disconnecting from vCenter, resulting in no VSAN capabilities being visible when you try to create a VM Storage Policy. These storage providers (there is one on each ESXi host participating in the VSAN cluster) supply out-of-band information about the underlying storage system, in this case VSAN. If at least one of these providers is not communicating with the SMS (Storage Monitoring Service) on vCenter, then vCenter cannot display any of the capabilities of the VSAN datastore, which means you will be unable to build any further storage policies for virtual machine deployments (currently deployed VMs that already use VM Storage Policies are unaffected). Even a resynchronization operation fails to reconnect the storage providers to vCenter. This seems to be predominantly related to vCenter servers that were upgraded to vCenter 5.5U1, rather than newly installed vCenter servers.
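One workaround that has been suggested in similar situations, sketched below rather than offered as an official fix, is to restart the VSAN VASA provider daemon on the hosts and the Profile-Driven Storage service on vCenter, then attempt the storage provider resynchronization again from the web client. Verify against the relevant KB article before trying this in production:

```shell
# On each ESXi host in the VSAN cluster: restart the VASA provider daemon
/etc/init.d/vsanvpd restart

# On the vCenter Server Appliance: restart the Profile-Driven Storage service
# (on a Windows vCenter, restart the "VMware vSphere Profile-Driven Storage"
# service via services.msc instead)
service vmware-sps restart
```

As noted above, a resynchronization on its own did not help in this case, so the service restarts are the important part of the sequence.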
Well, after almost 8 months of work, the VSAN book that I have been working on with Duncan Epping is finally available for general download. This is the first book I’ve written, and I’ll always be grateful for the guidance and mentoring I received from Duncan. I’m also extremely grateful to a number of people at VMware Press (Pearson) for their willingness to sponsor this project. There are also numerous people at VMware that deserve thanks for their input and support, and you’ll find them listed in the acknowledgements section of the book.
We’re hopeful that this book will provide a definitive resource to all your VSAN queries.
If you’d like to download the book, there are a couple of links below. The Amazon Kindle version is available by clicking on the link below:
Or if you’d prefer, the ebook version of Essential VSAN is also available from Pearson. Click the book below for details:
We’ve seen a spate of incidents recently related to the HP Smart Array drivers that are shipped as part of ESXi 5.x. In the worst case, this leads to an out-of-memory condition and a PSOD (Purple Screen of Death) on the ESXi host. The bug is in the hpsa 184.108.40.206-1 driver, and all Smart Array controllers that use this driver are exposed to the issue. For details on the symptoms, check out VMware KB article 2075978.
HP have also released a Customer Advisory c04302261 on the issue.
This was a tricky one to deal with, as one possible step might be to roll back/downgrade the driver to an earlier version. Unfortunately, not only is this unsupported (and undocumented), but you might also find that an older driver does not work with a newer storage controller. The good news is that HP now have a new version of the driver available which fixes the issue. Customers should upgrade to HP Smart Array controller driver (hpsa) version 220.127.116.11-1 (ESXi 5.0 and ESXi 5.1) or version 18.104.22.168-1 (ESXi 5.5). Details on where to locate the driver and how to upgrade it are in the advisory. Consider doing this as soon as possible.
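A quick way to confirm which hpsa driver a host is currently running, and to apply the updated driver once you have downloaded the offline bundle from HP, is via esxcli. The bundle file name and datastore path below are illustrative assumptions:

```shell
# Check the currently installed hpsa driver version
esxcli software vib list | grep -i hpsa

# Apply the updated driver from HP's offline bundle
# (the bundle file name and path here are illustrative)
esxcli software vib update -d /vmfs/volumes/datastore1/hpsa-offline-bundle.zip

# A reboot of the host is required for the new driver module to load
```

Place the host in maintenance mode before the update, and reboot afterwards so the new driver takes effect.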