[Updated] This is a very short post, as I only learnt about this recently myself. I thought it was only available in vSphere 5.5, but it appears to be in vSphere 5.1 too. Anyhow, Storage DRS now has a new setting that allows you to configure the default VM affinity setting. Historically, VMDKs from the same virtual machine were always kept together on the same datastore by default; you had to set a VMDK anti-affinity rule to keep them apart. Now you can set a default for this option, which can either be to keep VMDKs together on the same datastore or to keep VMDKs apart on different datastores.
In my recent post about the new large 64TB VMDKs available in vSphere 5.5, I mentioned that one could not hot-extend a VMDK (i.e. grow the VMDK while the VM is powered on) to the new larger size, due to some Guest OS partition formats not being able to handle this change on-the-fly. The question was whether hot-extend was possible if the VMDK was already 2TB or more in size. I didn’t know the answer, so I decided to try a few tests in my environment. Continue reading
Regular readers will know that I’ve spent a lot of time recently posting about VSAN. But VSAN wasn’t the only announcement at VMworld 2013. We also announced the next release of vSphere – version 5.5. I now want to share with you a number of new storage enhancements which we have made in this latest release of vSphere. To begin with, we will look at a long-awaited feature, namely the ability to have virtual machine disk files that are larger than 2TB, the traditional maximum size of VMDKs.
I’ve blogged about the VMFS heap situation numerous times now already. However, a question that I frequently get asked is what actually happens when heap runs out? I thought I’d put together a short article explaining the symptoms one would see when there is no VMFS heap left on an ESXi host. Thanks once again to my good friend and colleague, Paudie O’Riordan, for sharing his support experiences with me on this matter – “together we win”, right Paud?
I was first introduced to Raxco Software when I wrote an article on the vSphere Storage Blog related to fragmentation on Guest OS file systems. In that post, I wanted to highlight some side effects of running a defragment operation on the file system in the Guest OS (primarily the Windows defragger). Raxco reached out to say that they had a product that would actually prevent fragmentation occurring in the first place, which I thought was rather neat. Bob Nolan, Raxco’s CEO, got in touch with me again recently to let me know about a new product that they were launching on April 23rd, 2013. If you’re looking for a solution to reclaim dead space from within a Guest OS, then read on.
I had a query about this recently, and actually it is a topic that I have not looked at for some time. Those of you configuring virtual machine disks may have seen references to these different configuration options and may have wondered how they affect the behavior of the virtual machine. Read on to find out the subtleties between Independent Persistent Mode and Independent Non-persistent Mode disks, and what impact they may have.
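As a quick illustration before we dive in: the disk mode is recorded per-disk in the virtual machine's .vmx configuration file. Here is a minimal sketch of the relevant entries (the SCSI device node `scsi0:1` and the disk filename are examples for illustration, not taken from any particular VM):

```
# Excerpt from a VM's .vmx file (device node and filename are examples)
scsi0:1.present = "TRUE"
scsi0:1.fileName = "data-disk.vmdk"
# Independent disks are excluded from VM snapshots; mode is one of:
#   independent-persistent    - changes are written to disk and survive power cycles
#   independent-nonpersistent - changes go to a redo log and are discarded at power off or reset
scsi0:1.mode = "independent-persistent"
```

In practice you would select these modes from the disk settings in the vSphere client rather than hand-editing the .vmx file; the same choice appears there as Independent – Persistent or Independent – Nonpersistent.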
This is a follow-up to my previous post on the 5.0U2 release. At the same time, VMware also released vCenter 5.1.0b. This post will look at the storage items which were addressed in that update, although the storage fixes are relatively minor compared to the enhancements made in other areas. Note that this update is for vCenter only – there is no corresponding ESXi 5.1 update.