A closer look at Infinio Accelerator 2.0

I took the opportunity last week (while I was over in the Boston area) to catch up with Scott Davis. I’ve known Scott a long time, as he held various roles at VMware over a number of years. Scott is currently CTO at Infinio, a company that has developed an I/O acceleration product for virtual machines. Infinio Accelerator 2.0 was released only a few weeks back, so I decided to reach out to Scott and find out about the enhancements that went into this new version.

Continue reading

vROps Management Pack for Virtual SAN

Virtual SAN already has a number of features and extensions for performance monitoring, real-time diagnostics and troubleshooting. In particular, there is VSAN Observer, which is included as part of the Ruby vSphere Console (RVC). Another new feature is the Health Check Plugin, which was recently launched for VSAN 6.0. However, a lot of our VSAN customers are already using vRealize Operations Manager, and they have asked if it could be extended to cover VSAN, allowing them to use a “single pane of glass” for their infrastructure monitoring. That’s just what we have done, and the beta for the vROps Management Pack for Virtual SAN is now open. You can sign up by clicking here.

Continue reading

VIO limitations with VDS networking

Earlier this month, I shared a post about my experiences with deploying VIO, VMware Integrated OpenStack. One of the issues I highlighted was that when I tried to create a network, it failed with a very unhelpful error message. The network creation failed due to a limitation with using a distributed switch (VDS); instead, I had to create what is known as a “provider network”, a special step needed for VDS networking. I am currently in the midst of an OpenStack training course, and I’m trying to relate what I am learning in class to my VIO deployment. What I’m finding is that there are a number of limitations when using VIO with a distributed switch, which makes it difficult to try out some of the concepts and lab exercises covered in the training class.
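
For reference, this is roughly what the provider network workaround looks like when driven through the Neutron API rather than the Horizon dashboard. Below is a minimal sketch using python-neutronclient; the credentials, endpoint and dvportgroup ID are placeholders, and the “portgroup” network type reflects my understanding of what the VDS-backed Neutron plugin expects:

    from neutronclient.v2_0 import client

    # Placeholder credentials and endpoint for illustration only.
    neutron = client.Client(username='admin',
                            password='secret',
                            tenant_name='admin',
                            auth_url='https://vio-endpoint:5000/v2.0')

    # A VDS provider network maps directly onto an existing distributed
    # portgroup, identified here by a placeholder moref.
    body = {
        'network': {
            'name': 'vds-provider-net',
            'admin_state_up': True,
            'provider:network_type': 'portgroup',
            'provider:physical_network': 'dvportgroup-101',
        }
    }
    net = neutron.create_network(body)
    print(net['network']['id'])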

Continue reading

fsck of vCenter Server Appliance 6.0 partitions

I hadn’t realized that we had now begun to use LVM (Logical Volume Manager) in the vCenter Server Appliance (VCSA) version 6.0. Of course, I found out the hard way, after a network outage in our lab brought down our VCSA, which was running on NFS. On reboot, the VCSA complained about file system integrity as follows:

Continue reading

Adventures in VIO, VMware Integrated OpenStack

I’ll start this post by stating straight up that I am no OpenStack expert. Far from it. In fact, the only reason I started to play with VMware Integrated OpenStack (VIO) was to get up to speed for a forthcoming OpenStack class that I am taking next week. What I’ve documented here is a bunch of issues I ran into during the VIO deployment process. Hopefully these will prove useful to some folks who are also new to OpenStack and plan on going through the same exercise. I’m not going to describe VIO in any detail, nor any of the OpenStack components such as Nova, Cinder, Glance, etc. (this is all just a quick Google search away), but suffice it to say that what VMware has done is bundle OpenStack into a set of VMs responsible for running the different services, allowing you to deploy and stand up an OpenStack configuration very quickly indeed.
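
To make the “set of VMs running different services” concrete, once VIO is up and running you can list the service catalog that those VMs collectively provide. Here is a minimal sketch using python-keystoneclient; the credentials and endpoint are placeholders for illustration:

    from keystoneclient.v2_0 import client

    # Placeholder credentials and endpoint for illustration only.
    keystone = client.Client(username='admin',
                             password='secret',
                             tenant_name='admin',
                             auth_url='https://vio-endpoint:5000/v2.0')

    # Each entry corresponds to an OpenStack service (nova, glance,
    # cinder, neutron, and so on) provided by the VIO-managed VMs.
    for service in keystone.services.list():
        print(service.name, service.type)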

The process to deploy VIO is twofold:

  1. Deploy the VIO OVA, which installs a management plugin in the vSphere Web Client, as well as the OpenStack template. This could be deployed on the same cluster as your VIO management cluster, or on a dedicated management cluster.
  2. Provision OpenStack on top of vSphere. This would be deployed on a dedicated cluster of ESXi hosts, which becomes your VIO management cluster (a quick sanity check of the finished deployment is sketched below).
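
Once the OpenStack provisioning step completes, a quick way to verify that everything registered correctly is through the standard OpenStack APIs, which VIO exposes unmodified. A minimal sketch using python-novaclient; again, the credentials and endpoint are placeholders, not values from my deployment:

    from novaclient import client

    # Placeholder credentials and endpoint for illustration only.
    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'https://vio-endpoint:5000/v2.0')

    # List the registered compute services; with the vSphere driver,
    # each compute cluster appears as a single nova-compute service.
    for svc in nova.services.list():
        print(svc.binary, svc.host, svc.state)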

After the completion of step 2, you are ready to begin provisioning VMs. The following are some issues that I encountered when rolling out my VIO deployment; they may help you if you have to go through the same exercise.

Continue reading

Virtual SAN – free 6-month trial – thank you VMUG

A short post again today. For those of you who are considering evaluating Virtual SAN, our friends over at the VMware User Group (VMUG) are giving you the opportunity to trial VSAN for six months. This offer is only available to VMUG members, but joining VMUG is free. And really, if you are not already a member of your local VMUG, shame on you. This is a great way to get hands-on experience with VSAN. What are you waiting for? Click here to get your six-month trial of VSAN.

On the topic of VMUGs, I will be presenting on VMware’s Software Defined Storage initiatives at the Germany/Deutschland West VMUG User Conference in Frankfurt on June 17th, and again at the Poland/Polska VMUG in Warsaw on June 18th. Please sign up and support your local VMUG. If you are attending either of these events, I look forward to seeing you there.

Heads Up! Horizon View 6.1 and AF-VSAN Policies

I had a query recently from a partner who was deploying VMware Horizon View 6.1 on top of an all-flash VSAN 6.0 configuration. They had done all the due diligence in configuring the AF-VSAN appropriately, marking certain flash devices as capacity devices, and so on. The configuration looked something like this:

They then went ahead and deployed Horizon View 6.1, which they had done many times before on hybrid configurations. They were able to successfully deploy full-clone pools on the AF-VSAN, but hit a strange issue when deploying linked-clone pools (floating/dedicated). The clone virtual machine operation would fail with an “Insufficient disk space on datastore” error, similar to the following:

Continue reading