This is an interesting announcement for those of you following emerging storage technologies. We’ve been talking about flash technologies for some time now, but for the most part flash has been delivered either as an SSD or as a PCIe device. Well, we now have another format: DIMM-based flash storage. And VMware now supports it.
I was discussing this issue with a good friend of mine over at Tintri, Fintan Comyns. Fintan was seeing some strange behaviour when cloning virtual machines running a Windows 2008 R2 Guest OS using the Tintri VAAI-NAS plugin, and wanted to know if this behaviour was normal or not. Basically, what he was seeing was that a clone operation of a virtual machine was not being offloaded. Instead, two separate, independent snapshots (snapshots that were not in a chain, but both pointing to the base VMDK) were being created at the time of the clone operation. Fintan also reported that if they used the Sync Driver or stopped VMware Tools altogether in the Windows 2008 R2 Guest OS, the operation worked and the clone was offloaded. The same operation was tried with a Windows 7 Guest OS running in a virtual machine, and in this case a single snapshot was created and the clone was offloaded. So what was going on? It had us scratching our heads for a while.
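If you want to reproduce the test yourself, a minimal pyVmomi sketch along the lines of the one below can be used to kick off a clone of the running VM; whether the clone actually gets offloaded can then be checked on the Tintri side. To be clear, this is just an illustrative sketch: the vCenter address, credentials and VM names are placeholders, and error handling is omitted.

```python
# Minimal pyVmomi sketch: clone a powered-on VM so that the offload
# behaviour can be observed on the array. The vCenter address, credentials
# and VM names are placeholders - substitute your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the source VM by name (first match wins).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "win2008r2-source")

# Clone in place on the same datastore and resource pool. The snapshot
# taken under the covers during a hot clone is where the guest quiescing
# behaviour described above comes into play.
relocate = vim.vm.RelocateSpec(datastore=vm.datastore[0], pool=vm.resourcePool)
spec = vim.vm.CloneSpec(location=relocate, powerOn=False, template=False)
task = vm.CloneVM_Task(folder=vm.parent, name="win2008r2-clone", spec=spec)
print("Clone task started: %s" % task.info.key)

Disconnect(si)
```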
Last week I had the opportunity to catch up with Mike Koponen and Dean Steadman of Fusion-io. I had met Mike and Dean at VMworld 2013 and spoke to them about Fusion-io's acquisition of NexGen Storage earlier last year, and what plans Fusion-io had for this acquisition. Well, the result is ioControl Hybrid Storage, and we discussed some of the architecture of ioControl as well as a number of vSphere integration points.
This is an issue which has caught a number of customers out during the Virtual SAN beta, so it will probably catch some folks out when the product goes live too. One of the requirements for Virtual SAN (VSAN) is to allow multicast traffic on the VSAN network between the ESXi hosts participating in the VSAN cluster. However, as per our engineering lead on VSAN, multicast is only used for relatively infrequent metadata operations, for example object creation, a change in object status after a failure, and the publication of statistics such as a significant change in free disk space (statistics publication is throttled so that only significant changes cause an update, so these are also very infrequent events).
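If you want a quick sanity check that multicast traffic is actually being passed on the segment you intend to use for VSAN, a throwaway listener like the one below can help. Note that the multicast group and port here are placeholders rather than the addresses VSAN itself uses; substitute whatever your environment is configured with.

```python
# Throwaway multicast listener: join a group and print whatever arrives,
# just to confirm the network is passing multicast. The group address and
# port below are placeholders, not the addresses VSAN itself uses.
import socket
import struct

GROUP = "239.1.2.3"   # placeholder multicast group
PORT = 12345          # placeholder UDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group on all interfaces.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print("Listening for multicast on %s:%d ..." % (GROUP, PORT))
while True:
    data, sender = sock.recvfrom(65535)
    print("Received %d bytes from %s" % (len(data), sender[0]))
```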
A short and sweet post today. In vSphere 5.0, VMware introduced support for 16Gb FC HBAs. However, these HBAs had to be throttled down to work at 8Gb. In 5.1, VMware supported these 16Gb HBAs running at 16Gb. However, an important point to note is that there was no support for full end-to-end 16Gb connectivity from host to array in vSphere 5.1. To get full bandwidth, you possibly had to configure a number of 8Gb connections from the switch to the storage array.
With the release of vSphere 5.5, VMware now supports 16Gb E2E (end-to-end) Fibre Channel.
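As a rough back-of-the-envelope illustration of the point above, the little sketch below works out how many 8Gb switch-to-array links you would need to match the aggregate bandwidth of a set of 16Gb host-facing links, ignoring protocol overhead.

```python
import math

def array_links_needed(host_links_16gb, array_link_speed_gb=8):
    """How many array-facing links are needed to match the aggregate
    bandwidth of the 16Gb host-facing links (protocol overhead ignored)."""
    host_bandwidth_gb = host_links_16gb * 16
    return math.ceil(host_bandwidth_gb / array_link_speed_gb)

# A single 16Gb HBA port needs two 8Gb links on the array side to avoid
# being throttled by the slower hop; two 16Gb ports would need four.
print(array_links_needed(1))   # -> 2
print(array_links_needed(2))   # -> 4
```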
For those of you participating in the VMware Virtual SAN (VSAN) beta, this is a reminder that there is a VSAN Design & Sizing Guide available on the community forum. It is part of the Virtual SAN (VSAN) Proof of Concept (POC) Kit, and can be found by clicking this link here. The guide has recently been updated to include some Host Memory Requirements, as we got this query from a number of customers participating in the beta. The actual host memory requirement is directly related to the number of physical disks in the host and the number of disk groups configured on the host. If you want to know more about disk groups, have a read of an article that I wrote about disk groups on the vSphere storage blog.
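As a trivial illustration of that relationship, the sketch below estimates host memory consumption as a function of the number of disk groups and the disks per group. Note that the overhead constants are placeholder values for illustration only, not the figures from the guide; plug in the numbers from the Design & Sizing Guide for a real estimate.

```python
def vsan_host_memory_estimate(num_disk_groups, disks_per_group,
                              base_overhead_gb=0.5,
                              per_disk_group_overhead_gb=1.0,
                              per_disk_overhead_gb=0.5):
    """Rough host memory estimate (GB) for VSAN.

    The overhead constants are placeholders for illustration only;
    take the real figures from the VSAN Design & Sizing Guide.
    """
    return (base_overhead_gb
            + num_disk_groups * per_disk_group_overhead_gb
            + num_disk_groups * disks_per_group * per_disk_overhead_gb)

# Example: 2 disk groups, each with 7 disks.
print("%.1f GB" % vsan_host_memory_estimate(2, 7))
```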
We at VMware have been making considerable changes to the way that the All Paths Down (APD) and Permanent Device Loss (PDL) conditions are handled. In vSphere 5.1, we introduced a number of enhancements around APD, including timeouts for devices that entered the APD state. I wrote about the vSphere 5.1 APD improvements here. In vSphere 5.5 we introduced yet another improvement to this mechanism, namely the automatic removal from the ESXi host of devices which have entered the PDL state.
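If you want to keep an eye on device states from the vSphere API side, a small pyVmomi loop along these lines will list each SCSI device on a host together with its operational state. The connection details are placeholders, and exactly which state strings an APD or PDL device reports will depend on the array and the vSphere release, so treat this purely as a starting point.

```python
# List each SCSI device on every host along with its operational state.
# Connection details are placeholders; the state strings reported for
# APD/PDL devices vary, so treat this purely as a starting point.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    print(host.name)
    for lun in host.config.storageDevice.scsiLun:
        print("  %s : %s" % (lun.canonicalName, ", ".join(lun.operationalState)))

Disconnect(si)
```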