Hot-Extending Large VMDKs in vSphere 5.5
In my recent post about the new large 64TB VMDKs available in vSphere 5.5, I mentioned that one could not hot-extend a VMDK (i.e. grow the VMDK while the VM is powered on) to the new larger size, because some Guest OS partition formats cannot handle this change on-the-fly. The question was whether hot-extend was possible if the VMDK was already 2TB or more in size. I didn't know the answer, so I decided to run a few tests in my environment.

In my experiment I added a 3TB disk to my VM, then tried to grow it to 4TB while the VM was still running. Note that there is no problem with adding disks while the VM is running; we are talking about growing an existing disk online. Unfortunately this did not succeed either:
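For anyone who wants to reproduce the test, here is a minimal PowerCLI sketch of the steps I took. This assumes an already-connected vCenter session; the VM name "TestVM" is a placeholder:

```powershell
# Assumes Connect-VIServer has already been run against your vCenter
$vm = Get-VM -Name "TestVM"   # placeholder VM name

# Step 1: hot-add a new 3TB disk while the VM is powered on - this works fine
New-HardDisk -VM $vm -CapacityGB 3072

# Step 2: attempt to hot-extend that disk to 4TB.
# On vSphere 5.5 this is rejected while the VM is powered on.
Get-HardDisk -VM $vm | Select-Object -Last 1 |
    Set-HardDisk -CapacityGB 4096 -Confirm:$false
```

The same behaviour is seen whether the grow is attempted through the vSphere Client or through the API, since the restriction is enforced server-side.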
Therefore, with vSphere 5.5 you cannot grow a VMDK which is less than 2TB to a size greater than 2TB on-the-fly, nor can you grow a VMDK which is already greater than 2TB to a larger size on-the-fly. Growing a VMDK in vSphere 5.5 to a size above 2TB can only be done while the VM is powered off.
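The resulting rule is simple enough to state in code. The following is my own illustration of the check vSphere 5.5 effectively applies, not a VMware API:

```python
TWO_TB_GB = 2048  # 2TB expressed in GB

def hot_extend_allowed(current_gb: int, new_gb: int) -> bool:
    """Return True if vSphere 5.5 permits growing a VMDK online
    (VM powered on) from current_gb to new_gb.

    Per the observations above: any grow that ends above 2TB is
    rejected online, regardless of the starting size; grows that
    stay at or below 2TB remain possible online.
    """
    if new_gb <= current_gb:
        raise ValueError("a VMDK can only be grown, not shrunk")
    return new_gb <= TWO_TB_GB

print(hot_extend_allowed(1024, 1536))   # True: stays below 2TB
print(hot_extend_allowed(1024, 3072))   # False: crosses the 2TB boundary
print(hot_extend_allowed(3072, 4096))   # False: already above 2TB
```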
18 Replies to “Hot-Extending Large VMDKs in vSphere 5.5”
Just a quick suggestion that the penultimate paragraph is a little confusing – the first sentence seems to suggest that hot-add is not supported *at all*, rather than not being supported to go from 2TB,
Ah, the blog ate my less-than and greater-than symbols as markup. Meant to say “… rather than not being supported to go from less-than 2TB to greater-than 2TB.”
The point I was trying to make was that if the VMDK is being grown to a size which is greater than 2TB, it cannot be done while the VM is powered on. It doesn’t matter if the original size is less than 2TB, or greater than 2TB. It simply cannot be grown while the VM is running.
Yup I completely get that – simply the wording of ‘Therefore, you cannot grow VMDKs which are less that 2TB on-the-fly with vSphere 5.5’ suggested to me that hot-add for disks smaller than 2TB was not possible *at all* in 5.5 (when of course that is not the case)
OK – let me rephrase it slightly. Thanks for highlighting.
Is this a limitation imposed by VMware or is it an error coming from the guest OS?
Wondering if it would work on at Linux guest OS.
It is something which we have imposed Mikael.
Ugh – now why would VMware impose that?
That’s like taking two steps forward (64TB disks – woohoo) and then one step back (can’t extend while the VM is running).
We imposed it because not all of the Guest OS could handle it – we wanted to avoid people breaking their Guest OS.
So my opinion on this is that you shouldn’t try to ‘protect us from ourselves’. We rely HEAVILY on hot-extend of disks in our organization and have self-service workflows that allow end-users to extend disks. We have been waiting a long time for >2TB disks, and not allowing us to do this is going to be a pain.
To me this is analogous to a storage array that allows extending of LUNs. Obviously not all operating systems can handle this, and you do it at your own risk, but the arrays don’t ‘stop’ you from doing it; some provide warnings, etc. With VMware ‘essentially’ replacing my storage array for my guest OS, I NEED to have the ability to ‘take the training wheels off’. Even allowing it only through the API/PowerCLI would be sufficient for us (maybe a flag or something that you need to add, like ‘Set-HardDisk -SizeGB 4096 -Allow2TBExtend’).
In the end, I understand that keeping track of which OSes support it, which don’t, and whether they are using a filesystem that can handle it is a HUGE undertaking, so you are trying to protect ‘us’. But I would say: don’t worry about the crazy matrix, and leave the onus on the customer to test and decide whether they will break their OS… (after all, we are all still doing backups, right? 🙂)
Just my 2 cents 🙂
Not every customer shares your belief here AK. We’ve had many times in the past where issues have occurred and we’ve been asked ‘why did the UI/CLI/API allow us to do that when it clearly breaks something’.
I’m making further inquiries about this – hopefully I can share more details around the sorts of problems we’ve seen in the near future.
Can you mention the OSes that don’t support this? Also, is there an advanced setting for changing this so it works for an OS that does support this?
Try it with a GPT initialized disk
I have to agree with AK on this one. If the only issue is uncertainty around OS support, wrap the feature in a warning or two or require us to set an advanced parameter in vCenter to allow large disk extension – but let us make that decision for ourselves.
Typically systems with humongous drives host critical applications that do not tolerate downtime well. I’d hate to be the one to have to tell the Exchange or data warehouse teams that we’ll have to have a service interruption just to add some more disk capacity.
However, if the problem has more to do with the intricacies of the VMFS or VMDK format rather than OS support that’s fine, but VMware should be clear about why this feature is not available and if there is a path toward making that functionality available in the future. That is an important distinction for me because if VMware is waiting for universal OS support for large disk extension we’ll never get there. If instead it’s something that just needs more testing or an update to VMFS or VMDK before being supported we know it’s in VMware’s court and who to work with to make this happen.
I get where you’re coming from.
But how would you feel telling your Exchange users or your Data Warehouse users that there is a Service Interruption while you restore after a corrupted filesystem issue? That’s what we’re faced with here.
As I said to AK, I’m making further inquiries.
Any further information here yet, Cormac?
Comments are closed.