Prior to the holidays, VMware released new versions of vCenter & ESXi on December 20th, with new releases for both vSphere 5.0 & 5.1. In this post, I want to discuss release 5.0 Update 2, which contains a number of notable storage-specific fixes that I wanted to highlight. I will follow up with a look at storage enhancements in the new 5.1 release in a future post.
- When upgrading from ESXi 4.x to ESXi 5.x, NFS datastores with spaces in their names were deleted and had to be manually added back to the host. This is now fixed.
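If you hit this on an upgraded host before applying the update, the datastore can be re-added from the ESXi command line. A minimal sketch, assuming a hypothetical NFS server, export path and datastore label — substitute your own values:

```shell
# List the NFS datastores currently mounted on this ESXi host
esxcli storage nfs list

# Re-add the missing datastore (server, export and label are placeholders)
esxcli storage nfs add --host=nfs01.example.com \
    --share=/vol/datastore1 \
    --volume-name="NFS Datastore 1"
```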
- If Storage I/O Control is enabled on an NFS datastore, it must be disabled on the datastore before you unmount it.
- Our Software FCoE (Fibre Channel over Ethernet) Adapter did not handle VLAN ID 4095 (a reserved ID) correctly, which could cause FCoE discovery to fail. This is now fixed.
- The esxcfg-scsidevs -a command now displays the virtual link state for Emulex Fibre Channel over Ethernet Converged Network Adapters. Previously, it displayed the physical link state.
- Storage Maps don’t display if Software FCoE is configured after the ESXi host is added to the vCenter server. The workaround is to configure Software FCoE before adding the ESXi host to the vCenter server.
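To apply the workaround, Software FCoE can be enabled from the host's command line before the host is joined to vCenter. A sketch of the steps — the vmnic name is a placeholder for whichever FCoE-capable NIC is in your host:

```shell
# Show which NICs on this host support Software FCoE
esxcli fcoe nic list

# Activate Software FCoE on a capable NIC (vmnic2 is a placeholder)
esxcli fcoe nic discover -n vmnic2

# Verify that the Software FCoE adapter was created
esxcli fcoe adapter list
```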
- When adding a new VMDK to a VM which resides on a Storage DRS datastore cluster, Storage DRS uses Storage vMotion to move VMDKs between the datastores to free up enough space for the new VMDK when no single datastore has enough. Although the Storage vMotion operations succeed, the overall operation could still fail with an “Insufficient Disk Space” error. This is now fixed.
- Storage DRS uses data patterns to determine the I/O characteristics and performance of the datastore. IOMeter populates its files with zeroes by default. To accurately test Storage DRS with IOMeter, use a version of IOMeter that writes random patterns of ones and zeroes to its files (or else use real data to populate at least 20 percent of the storage space on the datastore).
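As a rough alternative to a patched IOMeter build, non-zero data can be laid down directly from the ESXi shell. A sketch only — the datastore path and the sizes (here ~20GB, i.e. 20 percent of a hypothetical 100GB datastore) are assumptions to adjust for your environment:

```shell
# Fill ~20% of a (hypothetical) 100GB datastore with random data so
# Storage DRS sees realistic, non-zero patterns instead of all zeroes
dd if=/dev/urandom of=/vmfs/volumes/datastore1/fill.dat bs=1M count=20480
```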
- Care must be taken when adding extents to a VMFS datastore. If you already have an ATS (Atomic Test & Set) capable device and you try to extend the VMFS with a non-ATS device, the operation will fail with “Operation failed, unable to add extent to filesystem”. Use the command ‘esxcli storage core device vaai status get -d device_ID’ to check extent candidates and verify that the ATS Status displays supported.
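For example, to check a candidate extent before attempting the operation — the NAA device identifier below is a placeholder for one of your own devices:

```shell
# List the devices visible to the host, then query VAAI support for one
esxcli storage core device list
esxcli storage core device vaai status get -d naa.60a98000572d54724a34655733506751

# In the output for the device, confirm the line:
#   ATS Status: supported
```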
- If a snapshot operation fails near the end of the task, redo logs created as part of the operation are not consolidated. This is now fixed.
- If you try to create a quiesced snapshot of a VM running Windows Server 2008 R2 and specify a working directory, it may fail with “Failed to quiesce the virtual machine“. Similarly, if you vMotion a VM running Windows Server 2008 R2 from ESXi 4.0 to ESXi 5.0, and then do a Storage vMotion operation before rebooting the VM, it may also fail. Both of these are fixed in this release.
- In vCenter Server, a virtual machine’s disk type could not be changed when performing a Storage vMotion of the virtual machine to a datastore cluster. This is now fixed.
- When using vCenter 5.0 to manage ESX/ESXi 4.x hosts, certain migration & cloning tasks fail at 99% with “A general system error occurred: Configuration information is inaccessible.” This is now fixed.
- Last, but not least, a lot of customers used Storage vMotion to rename virtual machine files in 4.x. This behaviour changed in 5.0: when you renamed a virtual machine in the vSphere Client and then performed a Storage vMotion to have its folder and associated files renamed to match the new name, the virtual machine folder name changed but the virtual machine file names (including the VMDK disks) did not. This functionality is now back in this release.
Get notification of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage