I was going to make this part 11 of my vSphere 5.5 Storage Enhancements series, but since this is such a major enhancement to storage in vSphere 5.5, I thought I’d put a little more focus on it. vFRC, short for vSphere Flash Read Cache, is a mechanism whereby the read operations of your virtual machine are accelerated by using an SSD or a PCIe flash device to cache the disk blocks of the application running in the Guest OS. Now, rather than going to magnetic disk to read a block of data, the data can be retrieved from a flash cache layer to improve performance and lower latency. This is commonly known as a write-through cache, where a write operation is only acknowledged once it has been committed to persistent storage, as opposed to a write-back cache, where the write operation is acknowledged as soon as the block of data enters the cache layer.
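To make the write-through versus write-back distinction a little more concrete, here is a minimal Python sketch of my own. It is purely illustrative (the class and variable names are made up, and this has nothing to do with how vFRC is actually implemented): reads are served from a flash-style cache when possible, while writes are only acknowledged after they reach the backing store in write-through mode, or as soon as they land in the cache in write-back mode.

```python
# Toy illustration only - not VMware code. A pretend "datastore" and flash
# cache to show the difference in write acknowledgement semantics.

class ToyCache:
    def __init__(self, backing_store, write_back=False):
        self.backing = backing_store   # dict standing in for magnetic disk
        self.cache = {}                # dict standing in for the flash layer
        self.write_back = write_back
        self.dirty = set()             # blocks not yet flushed (write-back only)

    def read(self, block):
        # Reads are what get accelerated: serve from flash if cached,
        # otherwise fetch from the backing store and populate the cache.
        if block in self.cache:
            return self.cache[block]
        data = self.backing[block]
        self.cache[block] = data
        return data

    def write(self, block, data):
        self.cache[block] = data
        if self.write_back:
            # Write-back: acknowledge as soon as the data is in the cache;
            # the backing store is updated later (riskier on failure).
            self.dirty.add(block)
            return "ack (cached only)"
        # Write-through: the write goes to the backing store before it is
        # acknowledged, so the cache never holds data the disk does not.
        self.backing[block] = data
        return "ack (persisted)"


disk = {"blk0": "old"}              # pretend magnetic datastore
wt = ToyCache(disk)                 # write-through behaviour
print(wt.write("blk0", "new"))      # -> "ack (persisted)"; disk already updated
print(wt.read("blk0"))              # -> "new", now served from the cache
```

The nice property of the write-through approach is that the persistent copy is always up to date, so losing the flash device or the cache contents never risks losing an acknowledged write; only the read cache needs to warm up again.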
I’ve been presenting at a number of conferences over the past few weeks and months, both internal and external. While a lot of my sessions have focused on Virtual SAN (VSAN), I got a number of questions about whether the new Software Defined Storage product from EMC, ViPR, competes with or complements Virtual SAN. Since ViPR 1.0 has been available since September, and a new release of ViPR is due out before the end of the year, I thought I’d take a closer look at what ViPR is all about and try to answer that question.
Before I left for PTO, I wrote an article on a number of different storage vendors you should be checking out at this year’s VMworld 2013. One of these was a new start-up called PernixData. With tongue firmly in cheek, I suggested that PernixData might use VMworld as a launchpad for their FVP (Flash Virtual Platform) product. Well, needless to say, my good friend Satyam Vaghani, CTO at PernixData, reached out to me to say that they were in fact announcing FVP before VMworld. He shared some details with me, which I can now share with you, if you haven’t already heard about the announcement.
One of my favorite parts of VMworld is the Solutions Exchange. This is where you can catch up with new products and features from established storage vendors. Not only that, but you can also meet the ‘new kids on the block’ and find out what cool, new ideas they bring to the field of storage and virtualization. This year is no different, with a couple of new storage products from established vendors, as well as some interesting new arrivals on the scene. This is by no means an exhaustive list; if you have recommendations on other items to check out, please let me know. However, these are the ones I plan on making a bee-line for this year. I’ve tried to break them out into categories to make it a little easier to see who is doing what.
The answer right now is no, but if you are interested in how this query came about, and why I decided to blog about it, continue reading. It has something for those of you interested in some of the underlying workings of Storage DRS.
I recently got hold of a copy of the new V-locity 4 product from Condusiv, which was released last month. Condusiv is the new name for Diskeeper, which you may have heard of before. I first came across them as a provider of software that specialized in optimizing I/O, primarily by preventing file fragmentation on NTFS in a Windows Guest OS. I blogged about them in the past on the vSphere Storage Blog after some discussions around defragmentation in the Guest OS. The new feature in V-locity 4 takes a portion of memory and uses it as a block cache. I did some preliminary tests with good ol’ IOmeter, and the initial results look quite good.
The advanced setting SunRPC.MaxConnPerIP defines the maximum number of unique TCP connections that can be opened to a given IP address. This is of particular interest to users of NFS. If the number of mounts to an IP address exceeds SunRPC.MaxConnPerIP, then the existing connections are shared by the additional mounts to that same IP address. Currently VMware supports a maximum of 128 unique TCP connections per ESXi host, but also supports up to 256 mounts per host. So what options are available to configure ESXi hosts to allow the maximum number of NFS mounts?
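To see how those two limits interact, here is a quick back-of-the-envelope sketch of my own (an illustration, not an official VMware sizing formula). It assumes the 256 mounts are spread evenly across your NFS server IP addresses, and the SunRPC.MaxConnPerIP values it tries are just examples.

```python
# Back-of-the-envelope illustration of the limits quoted above. The exact
# behaviour depends on your array layout; treat this as a sketch only.

MAX_TCP_CONNECTIONS = 128   # unique TCP connections per ESXi host
MAX_NFS_MOUNTS = 256        # NFS mounts per ESXi host

def connection_budget(max_conn_per_ip):
    """How many NFS server IPs fit in the connection budget, and how many
    mounts would share each IP's connections if all 256 mounts are spread
    evenly across those IPs."""
    unique_ips = MAX_TCP_CONNECTIONS // max_conn_per_ip
    mounts_per_ip = MAX_NFS_MOUNTS / unique_ips
    return unique_ips, mounts_per_ip

for value in (128, 32, 4):   # example SunRPC.MaxConnPerIP values
    ips, mounts = connection_budget(value)
    print(f"MaxConnPerIP={value}: up to {ips} server IPs, "
          f"~{mounts:.0f} mounts sharing each IP's connections")
```

The general idea is that the lower you set SunRPC.MaxConnPerIP, the more distinct NFS server IP addresses fit inside the 128-connection budget, at the cost of more mounts sharing each IP’s connections.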