vSphere 6.0 Storage Features Part 8: VAAI UNMAP changes

A few weeks ago, my good pal Cody Hosterman over at Pure Storage was experimenting with VAAI (the vSphere APIs for Array Integration) and discovered that he could successfully UNMAP blocks (reclaim) directly from a Guest OS in vSphere 6.0. Cody wrote about his findings here. Effectively, if you have deleted files within a Guest OS, and your VM is thinly provisioned, you can tell the array through this VAAI primitive that you are no longer using these blocks, which allows the array to reclaim them for other uses. I know a lot of you have been waiting for this functionality for some time. However, Cody had a bunch of questions and reached out to me to see if I could provide some answers. After conversing with a number of engineers and product managers here at VMware, here are some of the answers to the questions that Cody asked.

Q. Does VMware support UNMAPs directly from the Guest OS in vSphere 6.0?

Yes. Issuing UNMAPs directly from the Guest OS to reclaim space is supported in vSphere 6.0.

Q. What are the prerequisites required for UNMAP to run?

There are a number of prerequisites which must be met. These are:

  • VMDK must be thin provisioned
  • Virtual Machine Hardware version must be 11 (ESXi 6.0)
  • The advanced setting EnableBlockDelete must be set to 1 on the ESXi host (an example of setting this via esxcli follows this list)
  • The Guest OS must be able to identify the disk as thin (Windows 2012 [updated 30-Oct-2015] uses the B2 mode page to achieve this)
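
For example, the EnableBlockDelete setting can be checked and then enabled from the ESXi Shell. This is just a sketch, assuming the setting sits under /VMFS3/ as in earlier releases:

  # Check the current value of the setting
  esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
  # Allow in-guest UNMAPs to be passed through
  esxcli system settings advanced set --option /VMFS3/EnableBlockDelete --int-value 1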

Q. Does reclaiming only work for UNMAP? What about TRIM?

This is an “it depends” answer. TRIM is only used when the underlying device is an SSD – Solid State Disk. From what I understand, the TRIM gets converted to UNMAP in the I/O stack. However, there are some issues with TRIM getting converted into UNMAP. UNMAPs work at certain block boundaries on VMFS, whereas TRIMs do not have such restrictions. So, certain TRIMs converted into UNMAPs may fail due to block alignment issues.
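
To make the alignment issue concrete, here is a worked example which assumes, purely for illustration, a 1 MB unmap granularity on the VMFS side:

  • A TRIM covering the range 4 KB to 900 KB does not span any complete 1 MB block, so it cannot be converted into an UNMAP and the space is not reclaimed.
  • A TRIM covering the range 0 to 2 MB spans two complete 1 MB blocks, so it can be converted into UNMAPs for both blocks and the space is reclaimed.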

Q. Is this based on a VMware fling?

No. This is a feature of the Virtual Machine HW version 11, and functionality found in the Guest OS.

Q. Which Guest OSes support UNMAP?

Currently, the only Guest OS which supports this is Windows 2012 [Updated 30-Oct-2015]. Linux distributions check the SCSI version, and unless it is version 5 or greater, they do not send UNMAPs. We currently support only SCSI-2. There are further details in KB article 2112333.
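
From a Linux guest, one way to see which SCSI version the virtual disk is reporting (and hence why UNMAPs are not sent) is to look at the standard INQUIRY data. A sketch, assuming the sg3_utils package is installed and the disk is /dev/sda:

  # The version field in the output reflects what the virtual controller reports (SCSI-2, per the KB above)
  sg_inq /dev/sda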

Q. How do you trigger UNMAP from the Guest OS?

The point here is that UNMAP is triggered from the Guest OS, not from some VMware provided tool. Our understanding is that UNMAP is triggered by operations such as a defrag of the filesystem or a file deletion. Cody also mentioned in his blog post that an operation called “optimize drive” also seemed to work for him. This is a Microsoft tool, and we don’t have much insight into how it works, but one suspects that it behaves in a similar fashion to defrag.
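
For reference, the “optimize drive” operation can also be driven from PowerShell inside the guest. A sketch, assuming drive C: sits on the thin virtual disk:

  # Ask Windows to send TRIM/UNMAP for free space on the volume
  Optimize-Volume -DriveLetter C -ReTrim -Verbose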

Q. When might in-guest UNMAP not work?

  • If the advanced setting EnableBlockDelete is not set on ESXi
  • If the underlying disk is not thin
  • If the Virtual Hardware version is not 11 or higher
  • If the Guest OS cannot identify the disk as thin, e.g. it does not support the B2 mode page (Windows 2012 [updated 30-Oct-2015])
  • If the Guest OS is Linux
  • If the registry key for UNMAP support is disabled (see the check after this list)
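
The registry setting in question is, I believe, the one surfaced through the DisableDeleteNotify behaviour (an assumption on my part about which key is meant); it can be checked from an elevated PowerShell or command prompt in the guest:

  # 0 = delete notifications (TRIM/UNMAP) enabled, 1 = disabled
  fsutil behavior query DisableDeleteNotify
  # Re-enable delete notifications if they were turned off
  fsutil behavior set DisableDeleteNotify 0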

I know a lot of you have been asking for this functionality for some time. Enjoy!

26 Replies to “vSphere 6.0 Storage Features Part 8: VAAI UNMAP changes”

  1. > If the Guest OS is Linux

    Not even with the “discard” mount option for ext4 in RHEL 6?

    # man ext4
    [CUT]
    “discard/nodiscard”
    Controls whether ext4 should issue discard/TRIM commands to the underlying block device when blocks are freed. This is useful for SSD devices and sparse/thinly-provisioned LUNs, but it is off by default until sufficient testing has been done.
    [CUT]
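
    i.e. mounting with something like the following (the device and mount point are made up for illustration):

    # mount -o discard /dev/sdb1 /data

    or persistently, via an /etc/fstab entry:

    /dev/sdb1  /data  ext4  defaults,discard  0  2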

  2. > When might in-guest UNMAP not work?
    > – If the Guest OS is Linux

    How is this behavior related to the command “fstrim” in RHEL 7 ?

    # man fstrim
    “fstrim is used on a mounted filesystem to discard (or ‘trim’) blocks which are not in use by the filesystem. This is useful for solid-state drives (SSDs) and thinly-provisioned storage.”

    1. This is due to a limitation with the vSCSI version that we use, and not the Guest OS. The KB article referenced in the post has the details.

      1. It’s a pity that VMware didn’t make the paravirtual driver capable of emulating a higher SCSI protocol and thereby get the benefits of that. That way the legacy LSI controllers could stay as they are today and the VMware one could advance. Or implement a new paravirtual (v2) controller that does the same, as you are required to have VM HW11 anyway. I’m sure there would also be other benefits outside of UNMAP from getting a more advanced SCSI controller in the system.

  3. Just confirming that this is true with VVol based VMs as well.

    Also, what is the mechanism for this between the guest OS and the hypervisor? Is it something within VMware Tools, or does the virtual SCSI adapter have the ability to recognize and forward on UNMAP commands?

    Thanks!

    1. Yep – VVols supports this too.

      Nothing in tools afaik Eric. We just forward the UNMAPs initiated from the Guest.

  4. Could you tell us your guess about this:
    Would it work if you used Raw LUNs for Linux VMs? (Provisioned as thin by an array that supports doing that)
    Or is the SCSI driver then still only reporting the old SCSI protocol version?

    1. At a guess, if the RAW LUN was thinly provisioned on the array, it “could” work. But I have not tested that.

      1. We are running several RHEL 6 servers with “Raw Device Mappings” to thin Compellent volumes.

        Using the mount option “discard” on ext4 file-systems, the reclamation of deleted data works perfectly (and this is on vSphere 5.1).

        (We are still hoping that we can migrate from RDMs to vDisks some day and benefit from thin provisioning.)

  5. Hi Cormac

    Regarding the thin setting on the VMDK for UNMAP to work.
    Running on 3PAR we normally have to use Eager Zeroed Thick on the disk for performance reasons. Best practice is not to use thin on thin volumes.

    So would we need to change the VMDKs on these systems to thin, going against best practice, to make UNMAP work? Or can VMware still make UNMAP work on a 3PAR using Eager Zeroed disks on thin provisioned volumes from the storage?

    I can see VVols solves this, but getting all customers to that point takes a while.

    ./Thanks

    1. In my experience, the Guest will only send UNMAPs if it detects that the disk is thin. So it doesn’t seem to matter what is on the array, but rather the format of the VMDK. So I do not think you can make it work with EZT or LZT disks, even if the underlying storage on the array is thin.

      1. I agree with Claus regarding the question of why thin provisioning is required in order to use the feature.
        As the storage presented to the VM is abstracted anyway, why not present HW11 storage as “Thin” to the VM OS no matter whether it is marked “Thin”, “ThickLazy” or “ThickEager”? The latter markings remediate the VMFS overprovisioning/latency problem and have a purpose that has nothing to do with UNMAP, as long as we use something other than VSAN.

        And as long as VSAN is licensed separately, I can see a market for other players that need the VMFS filesystem.

        1. This is a quirk of the Guest OS. The Guest OS needs to see the disk as thin before it issues an UNMAP.

          Think about it though for a minute. If you’ve already asked for a disk to be pre-allocated space, you don’t want to shrink it. You’re not really worried about space efficiency (maybe it’s not an issue).

          I could think of a scenario where you ask for a thick disk, it gets deployed, you install the guest OS and it consumes 10%, then reclaim kicks in and your thick disk is now back at 10% of its original size – not what you want.

          When you are concerned about space efficiency, you use thin, and would be happy for space that is no longer used to be reclaimed, and the VMDK shrunk.

          I suspect this could be the reason for this requirement.

          1. Hi again.
            I don’t agree with you. Let me try to explain my thought without being presumptuous.

            After some further thought, I can agree on Thick Eager Zero disks not doing UNMAP, to avoid the last percentage of latency from having to re-zero previously released blocks when they are reused.

            I could think of another way of solving this in VMFS world.

            Scenario (1) – As it is now in vSphere 6, thin VMs, Thin LUN, VM OS sees thin disks and does UNMAP accordingly.

            Scenario (2) – Thick Lazy Zero VM, Thin LUN, VM OS should see thin disks. This would enable the VM to release blocks in the thin provisioned LUN yet keep the VMFS from becoming overcommitted. ESX passes the UNMAP to the underlying LUN yet keeps the VM’s logical size in VMFS the same way as today, the VMDK essentially becoming “sparse”. Not much different from how Thick Lazy Zero VMs act today, except for UNMAP working.

            Scenario (3) – Thick Eager Zero VM, Thin LUN, AKA “3PAR” and nowadays Dell Compellent and probably other SANs. The workaround for UNMAP would be to change to Thick Lazy Zero as described in Scenario (2), or a flag that allows ESX to pass UNMAP anyway.

            Scenario (4) – Thick Eager Zero VM, Thick LUN. UNMAP is not applicable, and possibly only SSD TRIM would be where it is, but I see that as a border-case scenario, as most tiering systems probably rely on SSD garbage collection anyway.

  6. Do any of the Engineers or Product Managers have an answer for Cody’s fourth question?

  7. So it does not matter for this feature what underlying storage is used for the VMFS?
    We use thin provisioning on ‘thick’ storage (not thin on thin).
    Thank you, Tom

      1. Does this work with vSAN? When I look at a disk on vSAN in the guest, it does not detect as ‘Thin’

    1. IIRC, it plays no role in reclaim on VMFS any more. You can still reclaim dead space on VMFS using vmkfstools -y, or the newer esxcli unmap command.
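
      For anyone looking for the syntax, a sketch (the datastore name and reclaim percentage are made up for illustration):

      # vSphere 5.5 and later: reclaim dead space on a VMFS datastore
      esxcli storage vmfs unmap -l MyDatastore
      # Earlier releases: run from within the datastore's directory
      cd /vmfs/volumes/MyDatastore
      vmkfstools -y 60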
