VMware certify new memory channel attached storage on vSphere 5.1 & 5.5
This is an interesting announcement for those of you following emerging storage technologies. We’ve been talking about flash technologies for some time now, but for the most part flash has been either an SSD or a PCIe device. Well, we now have another format – a DIMM-based flash storage device. And VMware now supports it.
SanDisk just made this recent announcement of the availability of their ULLtraDIMM. The ULLtraDIMM is built on Diablo Technologies’ DDR3 translation protocol (known as Memory Channel Storage™, or MCS™) licensed to SanDisk, using SanDisk’s own flash and controllers on the DIMM. The Register has a good write-up on the relationship here. Following on from that, the first server to support these DIMMs has been announced by IBM. Their new X6 architecture, announced last month – you can read about it here – supports integrated memory-channel storage using this new DIMM-based flash storage. Because this storage is so close to the processor, the I/O latency is significantly lower than what we are used to. Quoting the IBM press release:
5-10 microseconds write latency for eXFlash DIMMs in preliminary testing vs. 15-19 microseconds latency for PCIe-based flash storage from Fusion IO, Micron, and Virident, and 65 microseconds latency for Intel S3500 and S3700 SSDs. (Pending final IBM performance testing.)
Yes – that’s correct. 5-10 microsecond write latency.
And now VMware has IOVP (I/O Vendor Partner Program) Certification for the Diablo MCS™ family of devices on both vSphere 5.1 and vSphere 5.5. My understanding is that the TeraDIMM is the original Diablo prototype of SanDisk’s ULLtraDIMM, though it is the ULLtraDIMM that has been certified by VMware. All certified products in the IOVP are listed on the VMware Compatibility Guide. These are listed under the I/O Devices section and the Diablo TeraDIMMs are shown as Memory Channel Attached Storage (MCAS).
Right now, there is only one vendor supported, and that is Diablo Technologies.
These devices require a special async driver from Diablo that is not included in the base ESXi image. The drivers are available on the VMware download site. Clicking on the model in the VCG entry above will take you to the driver download on VMware.com:
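As a rough sketch, installing an async driver offline bundle on an ESXi host usually looks something like the following. Note that the bundle path and the “diablo” search string below are hypothetical placeholders – use the actual filenames and VIB names from the VMware download page:

```shell
# Copy the async driver offline bundle (zip) to a datastore on the host,
# then install it with esxcli. Path below is a placeholder, not the
# actual Diablo driver bundle name.
esxcli software vib install -d /vmfs/volumes/datastore1/diablo-mcs-driver-offline-bundle.zip

# A reboot is generally required before the new driver module loads.
reboot

# After the reboot, confirm the VIB is present on the host...
esxcli software vib list | grep -i diablo

# ...and check whether the device now shows up as a storage adapter.
esxcli storage core adapter list
```

Alternatively, the driver can be rolled into an image profile or pushed out via Update Manager, which is the more usual approach for a fleet of hosts.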
The first thing I thought on hearing about this announcement was: how soon can we qualify the TeraDIMM for Virtual SAN (VSAN)? Of course I have lots of other questions too, such as how these devices present themselves to an ESXi host, how they plug into our PSA (Pluggable Storage Architecture), whether they support smartd, whether they support vSphere Flash Read Cache, and so on. Exciting times once again in the storage space.
16 Replies to “VMware certify new memory channel attached storage on vSphere 5.1 & 5.5”
This can cause a serious double-take; Memory Channel was an old Digital interconnect used primarily by VMScluster systems. There’d be something amazing about it emerging from the historical mists in the context of VMWare…
I guess, but how many readers remember back that far? 🙂
This one does! We were still using memory channel on DEC Alphas until about 5-6 years ago. I think there’ll be more than a few 🙂
Of course I remember Memory Channel – and competing technologies like Quadruus (sp?) – remember them like dim yesterday. Oh! To have the regularity of the Alpha instruction set once again.
Yes, but you’re close to retirement now Lance, right? LOL
Every sunrise is one day closer
Is Diabolo Technologies Ultradimm certified for VSAN?
I do not believe so – it certainly is not on the HCL yet.
However, I think it (and other memory channel products) would be a great technology to have certified with VSAN.
Hey Cormac, seen this?
Apparently I am not the only one that suffers from this issue …
Nope – had not seen it before. Let me ask a few colleagues.
See http://www.v-front.de/2014/03/alert-issue-with-esxi-55-u1-driver.html which explains it pretty well ….
OK – working on getting that driver removed from the rollup ISO. Watch this space 🙂
See http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2075171 The problem seems to be caused by the be2iscsi driver included with the ESXi 5.5 Update 1 Driver Rollup ISO, and not the TeraDIMM driver.