This is something I only learnt about very recently. It seems that we have made a major improvement to the way we do snapshot consolidation in vSphere 6.0. Many of you will be aware that when a VM is very busy, snapshot consolidation may need to go through multiple iterations before we can successfully complete the consolidation/roll-up operation. In fact, there are situations where the snapshot consolidation operation could even fail if there is too much I/O. What we did previously was to use a helper snapshot, and redirect all the new…
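As a quick illustration (not from the post itself), here is a minimal sketch of how you might check for and trigger a consolidation from the ESXi shell; the VM ID of 42 is a placeholder you would look up first, and the grep assumes the consolidation flag appears in the summary output:

    # List registered VMs and note the Vmid in the first column
    vim-cmd vmsvc/getallvms

    # Check whether the VM (hypothetical Vmid 42) still needs consolidation
    vim-cmd vmsvc/get.summary 42 | grep consolidationNeeded

    # Remove all snapshots, which rolls the deltas back into the base disks
    vim-cmd vmsvc/snapshot.removeall 42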
This is a new feature in vSphere 6.0 that I only recently became aware of. Prior to vSphere 6.0, all the I/Os from a given virtual machine to a particular device would share a single I/O queue. This meant that all the I/Os from the VM (boot VMDK, data VMDK, snapshot delta) were queued into a single per-VM, per-device queue. This caused I/Os from different VMDKs to interfere with each other and could actually hurt fairness. For example, if a VMDK was used by a database, and this database issued a lot of I/O, this could compete with I/Os from the…
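For context, vSphere 6.0 exposes a kernel setting to toggle this new per-file I/O scheduling model. The setting name below is my best recollection rather than something from the excerpt, so verify it with esxcli on your own hosts before changing anything:

    # Check whether the per-file I/O scheduler is active
    esxcli system settings kernel list -o isPerFileSchedModelActive

    # Revert to the legacy per-VM, per-device queue (takes effect after reboot)
    esxcli system settings kernel set -s isPerFileSchedModelActive -v FALSE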
A very quick “public service announcement” post this morning folks, simply to bring your attention to a new knowledge base article that our support team have published. The issue relates to APD (All Paths Down), a condition that can occur when a storage device is removed from an ESXi host in an uncontrolled manner. The issue only affects ESXi 6.0. The bottom line is that even though the paths to the device recover and the device is back online, the APD timeout continues to count down and expire, and as a result, the device is placed in APD timeout…
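If you want to check where a host stands with respect to APD handling, something along these lines should help (the device identifier is a placeholder):

    # Show the APD timeout, in seconds, after which I/Os are fast-failed
    esxcli system settings advanced list -o /Misc/APDTimeout

    # Inspect the state of a specific device (naa.xxxx is a placeholder)
    esxcli storage core device list -d naa.xxxx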
I hadn’t realized that we had begun to use LVM (Logical Volume Manager) in the vCenter Server Appliance (VCSA) version 6.0. Of course, I found out the hard way after a network outage in our lab brought down our VCSA, which was running on NFS. On reboot, the VCSA complained about file system integrity as follows:
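For those curious, you can confirm the LVM layout from the VCSA shell with the standard LVM tools; a rough sketch (volume group and device names will differ on your appliance, the path below is a placeholder):

    # List physical volumes, volume groups and logical volumes
    pvs
    vgs
    lvs

    # If the appliance drops to a maintenance prompt over file system
    # integrity, a check along these lines is the usual remedy
    e2fsck -y /dev/mapper/VG-LV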
A short post today, but it highlights what I feel is an important enhancement to vSphere licensing. I’ve had lots of questions recently about why VAAI (vSphere Storage APIs for Array Integration) is not available in the standard edition of vSphere. This is especially true since I began posting about Virtual Volumes earlier this year, where it was clear that Virtual Volumes is available in the standard edition. One reason why this was confusing is that if a migration of a VVol could not be handled by the array using the VASA APIs, the migration would fall back to using VAAI…
A few weeks ago, my good pal Cody Hosterman over at Pure Storage was experimenting with VAAI (the vSphere APIs for Array Integration) and discovered that he could successfully UNMAP blocks (reclaim) directly from a Guest OS in vSphere 6.0. Cody wrote about his findings here. Effectively, if you have deleted files within a Guest OS, and your VM is thinly provisioned, you can tell the array through this VAAI primitive that you are no longer using these blocks. This allows the array to reclaim them for other uses. I know a lot of you have been waiting for…
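As a rough sketch of the host-side prerequisite (check the requirements in Cody's post before relying on this; the setting is off by default):

    # On the ESXi host: allow UNMAPs issued by the Guest OS to be passed
    # through to the array
    esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1

    # Confirm the current value
    esxcli system settings advanced list -o /VMFS3/EnableBlockDelete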
The more astute of you who have already moved to vSphere 6.0, and who like looking at CLI outputs, may have observed some new columns/fields in the PSA claimrules when you run the following command:

# esxcli storage core claimrule list --claimrule-class=VAAI

The new fields are as follows:

XCOPY Use Array  XCOPY Use          XCOPY Max
Reported Values  Multiple Segments  Transfer Size
---------------  -----------------  -------------
false            false              0
false            false              0
false            false              0
false            false              0
false            false              0
false            false              0
false            false              0
false            false              0
false            false              0
false            false              0
false            …
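To see the values in effect on your own host, the command from the post can be paired with a per-device check of which VAAI primitives are actually supported (the device identifier is a placeholder):

    # List the VAAI claimrules, including the new XCOPY columns
    esxcli storage core claimrule list --claimrule-class=VAAI

    # Check VAAI primitive support (clone/XCOPY, ATS, zero, delete) per device
    esxcli storage core device vaai status get -d naa.xxxx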