Is it just me, or does VMworld seem to come around quicker these days? Anyway, it is great to have a couple of sessions in again this year, and yes – you guessed it, these are VSAN sessions once again.
STO4572 – Successful Virtual SAN Evaluation/Proof-Of-Concepts
This is an update on last year’s VSAN Proof-Of-Concept talk. A lot has changed in the past year, and the idea of this session is to fill you in on all the potential gotchas that you might encounter when evaluating VSAN. I’ll be co-presenting with Julienne Pham of VMware, who has built up a wealth of field experience on VSAN. We’ll cover everything you need to know, including how to conduct various failure scenarios and how to get the best performance. Thinking about deploying VSAN? This is one not to miss.
Regular readers will know that I normally blog about the technical aspects of storage, as opposed to writing opinion pieces. However, there have been a number of articles published recently questioning the value of VMware’s Virtual Volumes, commonly referred to as VVols. In general, the pieces I have read ask whether or not VVols (or, to be more accurate, the per-VM granularity feature of VVols) adds value when NFS already provides per-VM granularity in the form of files. The point that these pieces missed is that VVols is so much more than per-VM granularity. I’ve just come back from some great VMUG events in Frankfurt, Germany and Warsaw, Poland, where I presented on the value of VVols to our users. I therefore thought it opportune to post about the other benefits of virtual volumes.
I took the opportunity last week (while I was over in the Boston area) to catch up with Scott Davis. I’ve known Scott a long time, as he held various roles at VMware over a number of years. Scott is currently CTO at Infinio, a company that has developed an I/O acceleration product for virtual machines. Infinio Accelerator 2.0 was released only a few weeks back, so I decided to reach out to Scott and find out about the enhancements that went into this new version.
Virtual SAN already has a number of features and extensions for performance monitoring and real-time diagnostics and troubleshooting. In particular, there is VSAN Observer, which is included as part of the Ruby vSphere Console (RVC). Another new feature is the Health Check Plugin, which was recently launched for VSAN 6.0. However, a lot of our VSAN customers are already using vRealize Operations Manager, and they have asked if this could be extended to VSAN, allowing them to use a “single pane of glass” for their infrastructure monitoring. That’s just what we have done, and the beta for the vROps Management Pack for Virtual SAN is now open. You can sign up by clicking here.
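For anyone who has not tried VSAN Observer yet, launching it from RVC looks roughly like this. Treat it as a sketch: the vCenter login and cluster path are placeholders for your own environment.

```shell
# Sketch only: the vCenter login and cluster path are placeholders.
# The guard lets this exit cleanly on hosts where RVC is not installed.
if ! command -v rvc >/dev/null 2>&1; then
    echo "rvc not found - run this where RVC is installed (e.g. on the vCenter server)"
    exit 0
fi
# Start an RVC session against the local vCenter
rvc administrator@vsphere.local@localhost
# Then, from within the RVC shell:
#   vsan.observer ~/computers/My-Cluster --run-webserver --force
# VSAN Observer then serves live statistics over HTTPS (port 8010 by default)
```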
Earlier this month, I shared a post about my experiences with deploying VIO, VMware Integrated OpenStack. One of the issues I highlighted was that when I tried to create a network, it failed with a very unhelpful error message. The network creation failed due to a limitation with using a distributed switch (VDS). Instead I had to create what is known as a “provider network”, a special step needed for VDS networking. I am in the midst of an OpenStack training course, and I’m trying to relate what I am learning in class to my VIO deployment. What I’m finding is that there are a number of limitations when using VIO with a distributed switch, which makes it difficult to try out some of the concepts and lab exercises covered in the course.
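For anyone who hits the same wall, the provider network step looked roughly like this in my setup. This is a sketch only: the network name, subnet, distributed portgroup ID, and credentials file name are all placeholders, not values you can copy verbatim.

```shell
# Sketch only: network name, portgroup ID and rc file name are placeholders.
# The guard lets this exit cleanly where the OpenStack CLI is not set up.
if ! command -v neutron >/dev/null 2>&1 || [ ! -f ./admin-openrc.sh ]; then
    echo "OpenStack CLI environment not available - run this where the neutron client and credentials exist"
    exit 0
fi
# Load admin credentials
. ./admin-openrc.sh
# With VDS networking, tenant networks cannot be created directly; instead,
# map a provider network onto an existing distributed portgroup
neutron net-create provider-net \
    --provider:network_type=portgroup \
    --provider:physical_network=dvportgroup-123
# Give it a subnet so instances can be assigned addresses
neutron subnet-create provider-net 192.168.100.0/24 --name provider-subnet
```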
I hadn’t realized that we had begun to use LVM (Logical Volume Manager) in the vCenter Server Appliance (VCSA) version 6.0. Of course, I found out the hard way, after a network outage in our lab brought down our VCSA, which was running on NFS. On reboot, the VCSA complained that a file system integrity check had failed.
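For reference, this is roughly how I went about checking things from the appliance shell. A sketch only, assuming the standard LVM and e2fsprogs tooling, with a placeholder volume name you would substitute from the `lvs` output:

```shell
# Sketch only: run from the VCSA shell; the volume name is a placeholder.
if ! command -v lvs >/dev/null 2>&1 || [ "$(id -u)" -ne 0 ]; then
    echo "needs LVM tools and root privileges - run this from the VCSA shell"
    exit 0
fi
# List the logical volumes the appliance now carves its disks into
lvs
# Check the filesystem on the affected logical volume
vol=/dev/mapper/example_vg-example_lv   # placeholder: substitute the volume lvs reports
[ -e "$vol" ] || { echo "volume $vol not present on this host"; exit 0; }
e2fsck -y "$vol"
```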
I’ll start this post by stating straight up that I am no OpenStack expert. Far from it. In fact, the only reason I started to play with VMware Integrated OpenStack (VIO) was to get up to speed for a forthcoming OpenStack class that I am taking next week. What I’ve documented here is a bunch of issues I ran into during the VIO deployment process. Hopefully these will prove useful to some folks who are also new to OpenStack and plan on going through the same exercise. I’m not going to describe VIO in any detail, nor any of the OpenStack components such as Nova, Cinder, Glance, etc. (all just a quick google away), but suffice to say that what VMware has done is bundle OpenStack into a set of VMs responsible for running the different services, allowing you to deploy and stand up an OpenStack configuration very quickly indeed.
The process to deploy VIO has two steps:
Deploy the VIO OVA, which deploys a management plugin on the vSphere web client, as well as the OpenStack template. This could be deployed on the same cluster as your VIO management cluster, or it could be deployed to a dedicated management cluster.
Provision OpenStack on top of vSphere. This would be deployed on a dedicated cluster of ESXi hosts, which becomes your VIO management cluster.
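Once both steps complete, I found it worth a quick sanity check that the services came up before going any further. A sketch, assuming the standard Nova and Glance command-line clients and a credentials (rc) file downloaded from the Horizon dashboard; the file name here is my own assumption.

```shell
# Sketch only: requires the OpenStack clients and a credentials (rc) file.
if ! command -v nova >/dev/null 2>&1 || [ ! -f ./openrc.sh ]; then
    echo "OpenStack clients or credentials not available - run this where they are set up"
    exit 0
fi
# Load the credentials downloaded from the Horizon dashboard
. ./openrc.sh
# Every compute-related service should report state "up"
nova service-list
# Confirm the image service answers as well
glance image-list
```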
After completing step 2, you are ready to provision VMs. The following are some issues that I encountered when rolling out my VIO deployment. They may help you if you have to go through the same exercise.