PernixData FVP 1.0 is GA

Before I left for PTO, I wrote an article on a number of different storage vendors you should be checking out at this year’s VMworld 2013. One of these was a new start-up called PernixData. With tongue firmly in cheek, I suggested that PernixData might use VMworld as a launchpad for their FVP (Flash Virtualization Platform) product. Well, needless to say, my good friend Satyam Vaghani, CTO at PernixData, reached out to me to say that they were in fact announcing FVP before VMworld. He shared some details with me, which I can now share with you if you haven’t…

Storage vendors to check out at #VMworld 2013 (SFO)

One of my favorite parts of VMworld is the Solutions Exchange. This is where you can catch up with new products and features from established storage vendors. Not only that, but you can also catch up with ‘new kids on the block’ and find out what cool, new ideas they bring to the field of storage and virtualization. This year is no different, with a couple of new storage products from established vendors, as well as some interesting new arrivals on the scene. This is by no means an exhaustive list. If you have some recommendations on some other items…

Exporting the Storage Map in the vSphere UI to Visio

I just saw this, thought it was sort of cool, and wanted to share it with you. You may need, from time to time, to export the Storage Map view in the vSphere client to a format which can be used in reports, etc. It appears that the Storage Map may be exported to a Windows EMF (Enhanced Metafile Format) file. This format can then be imported into Visio: useful if you ever need to do that kind of thing.

Why is uploading files to VMFS so slow?

This is something which has come up numerous times, and a behavior which many of you have observed. There seems to be an issue with uploading files to a VMFS datastore. In fact, in one example, we had someone report that it took 10 minutes to upload a Windows 7 ISO to an iSCSI datastore and less than 1 minute to upload the same ISO to an NFS datastore. Both datastores were very healthy and fast, and both had running VMs on them. There have been variations of this behavior reported before. This post will try to explain why.
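If you want to reproduce the comparison yourself, one way is to push the same ISO to each datastore over the ESXi host's HTTPS file interface and time the transfers. This is just a sketch: the host name, datastore names, folder path, and credentials below are placeholders, and the `dcPath` value depends on your environment (`ha-datacenter` is the usual value when talking to a standalone host).

```shell
# Time an upload of the same ISO to a VMFS-backed and an NFS-backed
# datastore via the ESXi host's HTTPS /folder file interface.
# (Placeholder host, datastore, and file names throughout.)

time curl -k -u root -T win7.iso \
  "https://esxi01.example.com/folder/ISO/win7.iso?dcPath=ha-datacenter&dsName=vmfs-iscsi-ds"

time curl -k -u root -T win7.iso \
  "https://esxi01.example.com/folder/ISO/win7.iso?dcPath=ha-datacenter&dsName=nfs-ds"
```

Comparing the two `real` times gives you a like-for-like measurement of the behavior described above, independent of the datastore browser UI.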

SIOC and datastores spread across all spindles in the array

This is a query which has come up on numerous occasions in the past, especially in the comments section of a blog post on debunking SIOC myths on the vSphere Storage Blog. This post highlights some recommendations which should be implemented when you have a storage array that presents LUNs spread across all spindles, or indeed multiple LUNs all backed by the same set of spindles from a particular aggregate or storage pool.

Getting started with vscsiStats

I have had a few occasions recently to use vscsiStats. For those of you who may be unfamiliar with it, this is a great tool for virtual machine disk I/O workload characterization. Have you ever wondered about the most common I/O size generated by the Guest OS? What about the latency of those I/Os? What about checking to see the I/O generated by a Guest OS when it is in a so-called ‘idle’ state? vscsiStats can help with all of these queries, as well as providing some excellent troubleshooting options. The tool has been around since the ESX 3.5 days. This…
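To give a flavor of how the tool is driven, here is a sketch of a typical vscsiStats session, run from an ESXi shell. The world group ID below is a placeholder; you get the real one from the list step first.

```shell
# List running VMs and their world group IDs
vscsiStats -l

# Start collecting stats for a specific VM (world group ID 12345 is a placeholder)
vscsiStats -s -w 12345

# ... let the workload run for a while ...

# Print the I/O length histogram; other histogram types include
# latency, seekDistance, outstandingIOs and interarrival
vscsiStats -p ioLength -w 12345

# Stop collection when done
vscsiStats -x -w 12345
```

The histograms it prints (I/O size, latency, seek distance, and so on) are exactly what answers the workload-characterization questions above.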