Thanks to our friends over at EMC (shout out to Itzik), we’ve recently been made aware of a limitation in the UNMAP mechanism in ESXi 5.0 & 5.1. It would appear that if you attempt to reclaim more than 2TB of dead space in a single operation, the UNMAP primitive does not handle it very well. The current thinking is that this is because we have a 2TB (minus 512 bytes) file size limit on VMFS-5. When the space to reclaim is above this size, we cannot create the very large temporary balloon file (part of the UNMAP process), and the host spews the following errors:
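To stay under the limit, the reclaim can be done in passes rather than all at once. In ESXi 5.0/5.1, `vmkfstools -y <percent>` sizes the temporary balloon file as a percentage of the datastore's free space, so you can pick a percentage that keeps each balloon file below 2TB. The numbers below are hypothetical, and the script only prints the command it would run; this is a sketch of the arithmetic, not a tested procedure:

```shell
# Chunked-reclaim sketch (hypothetical numbers).
# vmkfstools -y <pct> creates a temporary balloon file sized at <pct>%
# of the datastore's free space, so choose pct so the balloon file
# stays under the 2TB (minus 512 bytes) VMFS-5 file size limit.
free_gb=6144                               # e.g. ~6TB of free/dead space
limit_gb=2048                              # 2TB VMFS-5 file size ceiling
pct=$(( (limit_gb - 1) * 100 / free_gb ))  # largest safe whole percentage
echo "vmkfstools -y $pct"                  # run from /vmfs/volumes/<datastore>
```

Running the resulting command more than once (from the datastore's root directory) reclaims the remaining dead space in further sub-2TB chunks.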
Followers of my blog will have seen a number of articles posted recently about storage vendors that I managed to catch up with at this year’s VMware Partner Exchange in Las Vegas. In the last in this series of articles, I managed to spend some time with the folks from GreenBytes. The timing was very opportune, as GreenBytes just made a major announcement to their portfolio, namely their new vIO, the virtual storage appliance version of their IO Offload Engine solution for desktop virtualization. I met up with Michael Robinson (VP, Marketing), Jeff Eberhard (Sr. Systems Engineer) and Steve O’Donnell (CEO) of GreenBytes to get the low-down on their current product offerings and to learn a bit more about their very recent vIO announcement.
Prior to the holidays, VMware released new versions of vCenter & ESXi on December 20th. There were new releases for both vSphere 5.0 & 5.1. In this post, I want to discuss release 5.0 Update 2. There were a number of notable fixes specific to storage which I wanted to highlight. I will follow up with a look at storage enhancements in the new 5.1 release in a future post.
I recently received a question about the following message appearing in the VMkernel logs of an ESXi host:
2012-12-07T12:15:58.994Z cpu17:420151)ScsiDeviceIO: 6340: Could not detect setting of sitpua for device naa.xxx. Error Not supported.
So what does that mean? Firstly, it isn’t anything to be greatly concerned about. SITPUA, short for Single Initiator Thin Provisioning Unit Attention, is related to Out Of Space (OOS) conditions on Thin Provisioned LUNs. To ensure that an Out Of Space (OOS) warning is sent to just one host using the affected LUN, the SITPUA bit in the Thin Provisioning Mode Page must be set to 1. If it is set to 0, the warning is sent to all hosts with access to the OOS LUN.
Essentially, the bit determines whether the ‘thin provisioning threshold exceeded’ warning is delivered to a single initiator (bit set to 1) or to all initiators (bit set to 0). If it is delivered to all initiators, it will cause a warning storm in the vCenter events tab, i.e. duplicate warnings from all ESXi hosts sharing the affected datastore. The recommendation is to prevent this spam by having the device support the SITPUA bit in the Thin Provisioning Mode Page.
This is beyond VMware’s control, but if you observe a flood of OOS warnings on multiple hosts from a single LUN that has run out of space, then the solution to raise with your storage vendor is documented here – support the SITPUA bit.
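To make the single-initiator vs. all-initiators behaviour concrete, here is a toy decode of such a mode-page bit in shell. The byte value and bit position are purely illustrative (the real bit lives in the array's Thin Provisioning / Logical Block Provisioning mode page, and its exact offset is defined by the SCSI spec, not by this sketch):

```shell
# Illustrative only: pretend we read one mode-page byte from the array
# and that SITPUA is its least-significant bit (position is hypothetical).
byte=0x01
if [ $(( byte & 0x01 )) -ne 0 ]; then
  echo "SITPUA=1: OOS warning delivered to a single initiator"
else
  echo "SITPUA=0: OOS warning delivered to all initiators"
fi
```

With the bit clear, every ESXi host sharing the datastore would raise the same warning, which is exactly the vCenter event storm described above.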
Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @VMwareStorage
Welcome to part 3 of the NFS Best Practices series of posts. While part 1 looked at networking and part 2 looked at configuration options, this next post will look at interoperability with vSphere features. We are primarily interested in features which are in some way related to storage, and NFS storage in particular. While many of my regular readers will be well versed in most of these technologies, I’m hoping there will still be some items of interest. Most of the interoperability features are tried and tested with NFS, but I will try to highlight areas that might be cause for additional consideration.