VSAN 6.2 Part 7 – Capacity Views

If you’ve been following my series of VSAN 6.2 blog posts, you’ll be aware of the considerable number of new features, especially around space efficiency, such as deduplication and compression. On top of these, there is a new on-disk format (v3) and a new software checksum mechanism. All of these features introduce some capacity overhead in their own right, so to allow administrators to track where storage is being consumed, a brand new capacity view has been introduced in VSAN 6.2.

If we focus on the Capacity Overview first of all, we can see the full size of the VSAN datastore, which here is 20.38TB. We can also see the Deduplication and compression overhead. File system overhead and Checksum overhead are displayed in further detail in the Used Capacity breakdown in the lower half of the screen. We will take a look at that shortly.

The amount of space Used – Total on the VSAN datastore refers to how much space has been physically written (as opposed to logical size). This is a combination of Virtual disks, VM home objects, Swap objects, Performance management objects and Other items that may reside on the datastore. Other items could be ISO images, unregistered VMs, or templates, for example.
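As a back-of-the-envelope illustration of the relationship just described, Used – Total is simply the sum of the per-category values in the breakdown. All of the numbers below are made up for illustration; they are not taken from the screenshots:

```python
# "Used - Total" is the physically written space on the VSAN datastore:
# the sum of the per-category values shown in the Used Capacity breakdown.
# Example figures are hypothetical.

breakdown_gib = {
    "Virtual disks": 512.0,
    "VM home objects": 12.0,
    "Swap objects": 64.0,
    "Performance management objects": 2.0,
    "Other": 30.0,   # ISOs, unregistered VMs, templates, ...
}

used_total_gib = sum(breakdown_gib.values())
print(f"Used - Total: {used_total_gib:.0f} GiB")  # Used - Total: 620 GiB
```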

Note that the values displayed for VM objects in the Used Capacity breakdown in the lower half of the screen, when grouped by object types, refer to Used space before deduplication and compression have been taken into account. There is currently no information available to determine how much space the objects are consuming after deduplication and compression have done their processing.

However, that is not to say that we cannot tell how much space is being saved by these space efficiency features. The Deduplication and Compression overview on the top right gives administrators an idea of the space savings and dedupe ratio being achieved, as well as the amount of space that would be required if an administrator decided to disable the space efficiency features on VSAN and re-inflate any deduplicated and compressed objects. The space savings ratio increases as more “similar” VMs are deployed. Here is another sample where we deployed hundreds of VMs all running the same Guest OS. Not bad, eh?

This is telling us that without deduplication and compression, it would have required ~11TB of capacity to deploy the current workloads. With dedupe and compression, we’ve achieved it with ~400GB. One thing to keep in mind is that the “Used Before” value also includes replicas (RAID-1) and parity (RAID-5/6), which are only visible when you flip to the Group by data types view, which we will discuss shortly. This view can also tell you how much capacity would be required if you disabled dedupe and compression and re-inflated the VMs to their actual size. Always refer to this view if you ever plan such an action, and ensure there is enough capacity available.
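The arithmetic behind the ratio the UI reports is straightforward. A minimal sketch, using values approximated from the example above (the exact figures on your cluster will differ):

```python
# Dedupe/compression savings arithmetic as reported in the
# Deduplication and Compression overview. Approximate example values.

used_before_tib = 11.0        # logical space, incl. RAID replicas/parity
used_after_gib = 400.0        # physical space actually written

used_after_tib = used_after_gib / 1024

ratio = used_before_tib / used_after_tib          # the "dedupe ratio"
savings_tib = used_before_tib - used_after_tib    # space saved

print(f"Ratio: {ratio:.1f}x, savings: {savings_tib:.2f} TiB")

# "Used Before" is also the free capacity you would need before
# disabling dedupe/compression and re-inflating the objects.
```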

Here is a description of some of the objects seen in the capacity view:

Group by Object Types:

  • Performance management objects: Capacity consumed by objects created for storing performance metrics when the Performance service is enabled
  • File system overhead: Any overhead taken up by the on-disk file system (VirstoFS) on the capacity drives which is not attributed to deduplication, compression, or checksum overhead. When deduplication and compression are enabled, file system overhead is increased 10X to reflect the increase in the logical size of the VSAN datastore.
  • Deduplication and compression overhead: Overhead incurred to get the benefits of deduplication and compression. This includes the associated mapping tables, hash tables, and other mechanisms required for deduplication and compression.
  • Checksum overhead: Overhead to store all the checksums. When deduplication and compression are enabled, checksum overhead is increased 10X to reflect the increase in the logical size of the VSAN datastore.

When a VM and a template are deployed on the VSAN datastore, more objects appear:

  • Virtual disks: Capacity consumed by virtual machine disk (VMDK) objects that reside on the VSAN datastore
  • VM home objects: Capacity consumed by VM home namespace objects (containing virtual machine files) residing on the VSAN datastore
  • Swap objects: Capacity consumed by VM swap objects that reside on the VSAN datastore
  • Vmem: Capacity consumed by memory objects, created as a result of taking a snapshot of a VM that includes the VM memory, or from suspended virtual machines. Note that this will only be visible on VMs that are using Virtual Hardware version 10 or later.
  • Other: Capacity consumed by VM templates, unregistered VMs, standalone VMDKs not associated with VMs, manually created VSAN objects, and manually created directories that store ISOs, for example

Next, let’s look at the data types breakdown, which is another view available:

Group by Data Types:

  • Primary VM data: Capacity consumed by virtual machines, including VM home namespace, VM swap, and VMDK objects
  • Virtual SAN overhead: Capacity consumed by replicas, witnesses, RAID-5/6 parity, etc.
  • Temporary overhead: Capacity consumed temporarily when objects are being moved or rebuilt
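How large the Virtual SAN overhead bucket is depends mostly on the protection policy in play. As a rough sketch (the function below is purely illustrative, not a product API): RAID-1 keeps a full extra replica per failure tolerated, RAID-5 uses a 3+1 stripe, and RAID-6 a 4+2 stripe:

```python
# Illustrative relationship between policy and "Virtual SAN overhead".
# Witness components are ignored here; they consume a trivial amount.

def vsan_overhead_gib(primary_gib: float, scheme: str, ftt: int = 1) -> float:
    if scheme == "RAID-1":
        # one extra full replica per failure tolerated
        return primary_gib * ftt
    if scheme == "RAID-5":
        # 3 data + 1 parity (FTT=1): ~33% parity overhead
        return primary_gib / 3
    if scheme == "RAID-6":
        # 4 data + 2 parity (FTT=2): 50% parity overhead
        return primary_gib / 2
    raise ValueError(f"unknown scheme: {scheme}")

print(vsan_overhead_gib(100.0, "RAID-1"))   # 100 GB overhead for 100 GB of data
print(vsan_overhead_gib(100.0, "RAID-5"))   # ~33 GB overhead for the same data
```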

Used – VM Overreserved

If deduplication and compression are not enabled, you will see another field displayed in the Capacity overview part of the UI, called Used – VM Overreserved. This field tells you how much space has been reserved via a policy with object space reservation but is not actually in use. If this is a high value, you might want to consider reducing the object space reservation value and reclaiming some of this space for other use. Note that with deduplication and compression enabled, object space reservation must be set to either 0 or 100%; it cannot have a value in between.
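In other words, the field reports reserved-but-unwritten space. A hypothetical example (the numbers are made up):

```python
# "Used - VM Overreserved": space reserved through the Object Space
# Reservation policy rule but not actually written to. Hypothetical values.

vmdk_size_gib = 100.0
object_space_reservation = 1.0    # 100% reservation (thick-provision-like)
actually_written_gib = 20.0       # guest has only written 20 GiB so far

reserved_gib = vmdk_size_gib * object_space_reservation
overreserved_gib = reserved_gib - actually_written_gib
print(f"Overreserved: {overreserved_gib:.0f} GiB")  # Overreserved: 80 GiB
```

If that idle 80 GiB matters to you, lowering the reservation percentage in the policy reclaims it, as described above.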

For any questions about where capacity is being consumed, or how well dedupe/compression is working, this is the place to get that information. If there are some additional items that should be tracked from a capacity perspective, let me know and I will feed this back to the product managers and engineers.
