I’ve blogged about the VMFS heap situation numerous times already. However, a question I frequently get asked is: what actually happens when heap runs out? I thought I’d put together a short article explaining the symptoms one would see when there is no VMFS heap left on an ESXi host. Thanks once again to my good friend and colleague, Paudie O’Riordan, for sharing his support experiences with me on this matter – “together we win”, right Paud?
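For context, the maximum VMFS heap size on ESXi 5.x is governed by the advanced setting VMFS3.MaxHeapSizeMB. Here is a minimal sketch of how one might check (and raise) it from the ESXi shell – the 256MB value below is purely illustrative:

```
# Check the current maximum VMFS heap size (in MB) on an ESXi 5.x host
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB

# Illustrative only: raise the maximum heap size to 256MB
# (a host reboot is required for the new value to take effect)
esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256
```

When the heap is exhausted, you would typically also see allocation failures for the vmfs3 heap in /var/log/vmkernel.log, with warnings along the lines of “Heap vmfs3 already at its maximum size. Cannot expand.”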
I just got a notification about this myself today. Apparently there are some interoperability issues with VAAI (vSphere APIs for Array Integration) & EMC RecoverPoint on EMC VNX arrays. It looks like the VNX Storage Processor (SP) may reboot with Operating Environment Release 32 P204 in a RecoverPoint environment. EMC has just released a technical advisory today – ETA emc327099 – which describes the issue in more detail, but it basically advises customers to disable VAAI on all ESXi hosts in the RecoverPoint environment while they figure this out. Hopefully it won’t take too long to come up with a…
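For anyone who needs to act on this, the VAAI block primitives are disabled on an ESXi 5.x host through advanced settings – nothing needs to be uninstalled. A sketch of the standard approach is below; do check the ETA for exactly which primitives EMC wants turned off, and note that setting a value of 1 re-enables each one later:

```
# Disable the three VAAI block primitives on an ESXi 5.x host (0 = off)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0   # Full Copy (XCOPY)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0   # Block Zero (WRITE_SAME)
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0    # Hardware Assisted Locking (ATS)
```

These settings take effect immediately and do not require a host reboot.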
Thanks to our friends over at EMC (shout out to Itzik), we’ve recently been made aware of a limitation in our UNMAP mechanism in ESXi 5.0 & 5.1. It would appear that if you attempt to reclaim more than 2TB of dead space in a single operation, the UNMAP primitive does not handle it very well. The current thinking is that this is because of the 2TB (minus 512 bytes) file size limit on VMFS-5. When the space to reclaim is larger than this, we cannot create the very large temporary balloon file (part of the UNMAP process), and…
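As a reminder, manual reclamation on ESXi 5.0 U1 and 5.1 is driven by vmkfstools from within the datastore itself, so the practical implication of the above is to reclaim in stages, keeping each temporary balloon file under the 2TB limit. A rough sketch (the datastore name and percentage are illustrative):

```
# Run from within the VMFS-5 datastore you want to reclaim space on
cd /vmfs/volumes/mydatastore    # illustrative datastore name

# Reclaim dead space in stages; pick a percentage of free space small
# enough that the temporary balloon file stays below 2TB - 512 bytes.
# e.g. with ~6TB of free space, 30% keeps each balloon file at ~1.8TB.
vmkfstools -y 30
```

Repeating the operation with a modest percentage should let you work through a large amount of dead space without ever hitting the file size limit.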
Those of you attending VMUG (VMware User Group) meetings in the US recently may have come across the guys from Proximal Data. They were at the Austin & Silicon Valley VMUGs, & I believe they may even have had the keynote at the San Diego VMUG. I had the pleasure of meeting up with Rich Pappas (VP of Sales and Business Development) and storage veteran Rory Bolt (CEO) at VMware’s Partner Exchange this year. They gave me an overview of their new AutoCache 1.1 features.
I was first introduced to Raxco Software when I wrote an article on the vSphere Storage Blog related to fragmentation on Guest OS file systems. In that post, I wanted to highlight some side effects of running a defragment operation on the file system in the Guest OS (primarily the Windows defragger). Raxco reached out to say that they had a product that would actually prevent fragmentation from occurring in the first place, which I thought was rather neat. Bob Nolan, Raxco’s CEO, reached out to me again recently to let me know about a new product that they were…
Just thought I’d bring to your attention something that has been doing the rounds here at VMware recently, and will be applicable to those of you using QLogic HBAs with ESXi 5.x. The following are the device queue depths you will find when using QLogic HBAs for SAN connectivity:

- ESXi 4.1 U2 – 32
- ESXi 5.0 GA – 64
- ESXi 5.0 U1 – 64
- ESXi 5.1 GA – 64

The higher depth of 64 has been this way since 24 Aug 2011 (the 5.0 GA release). The issue is that this has not been documented anywhere. For the majority of…
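If you want to verify what a given device is actually using, or pin the QLogic driver back to a specific depth, a sketch along these lines should do it (the device identifier is a placeholder for your own LUN):

```
# Check the "Device Max Queue Depth" value for a specific LUN
# (naa.xxxxxxxx is a placeholder for your device identifier)
esxcli storage core device list -d naa.xxxxxxxx

# Set the QLogic qla2xxx driver's queue depth, e.g. back to 32
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=32
# The host must be rebooted for the module parameter to take effect
```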
Last week, I had a chance to catch up with Brady Murray and Rex Walters of Tintri. Mostly this was an exchange of information, but the guys let me know that they are on the verge of announcing a new per-VM replication feature, which they first demoed to me when I met Tintri at VMworld last year. This will be the main feature in Tintri’s new 2.0 launch.