There was a recent discussion on the forums around the supportability of quad port NICs when deploying the vSphere Storage Appliance. The installer throws an error when the ESXi host only has a quad port NIC, stating that the VSA installer ‘Failed to configure network on host’ because it ‘Could not find 2 NICs on the system’. However, there is a workaround to allow the VSA to install when the ESXi host(s) contain only a single quad port NIC. This workaround is only available on VSA 5.1.x.
The reasons for not using a quad port NIC on the VSA should be obvious. This is the response I gave on the forum:
There are two networks for the VSA: the front end for VSA management and NFS presentation, and the back end for cluster communication and replication. Because we didn’t want a single NIC failure to bring down the whole node, the requirement is to team each of the front-end and back-end networks. It was a design choice.
This is why the installer won’t let you proceed with a single quad port NIC – a NIC failure in this case (even with teaming) will bring down the whole node. So ideally, one would have two dual port NICs (or four single port NICs if you wish).
That way the VSA node can continue to function and present shared storage to your virtual infrastructure, even if a single NIC fails.
The same design choice was used when requiring a RAID level for disks on each host – we didn’t want a single spindle failure bringing down a complete node.
However, from a support perspective, it has been decided that VMware will support the use of quad port NICs with the VSA. There is a workaround that you can use for VSA 5.1 (or 5.1.1) which leverages some of the new brownfield deployment functionality. It basically requires you to preconfigure the network of each of the ESXi hosts in advance of installing the VSA, setting up the front-end and back-end networks manually. This means that the quad port NIC check is bypassed and the install will proceed. There is some work involved, so make sure that you set it up correctly.
Step 1 – Two vSwitches must exist – vSwitch0 & vSwitch1. In all likelihood, vSwitch0 will already exist, so only vSwitch1 must be created.
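If you prefer the ESXi shell over the vSphere Client, this step could be sketched as follows (esxcli syntax as per ESXi 5.x):

```shell
# Create vSwitch1; vSwitch0 normally exists out of the box on a fresh ESXi install.
esxcli network vswitch standard add --vswitch-name=vSwitch1
```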
Step 2 – vSwitch0 must contain 2 uplinks, e.g. vmnic0 & vmnic2 (both must be active, and teaming must be set to originating port id)
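Assuming the example vmnic numbering above (adjust to your host – vmnic0 is normally an uplink on vSwitch0 already), step 2 might look like this from the ESXi shell:

```shell
# Add the second uplink to vSwitch0, make both uplinks active, and set
# teaming to "Route based on the originating virtual port ID".
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic2
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
  --active-uplinks=vmnic0,vmnic2 --load-balancing=portid
```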
Step 3 – vSwitch1 must contain the other 2 uplinks, e.g. vmnic1 & vmnic3 (both must be active, and teaming must also be set to originating port id)
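Again assuming the example vmnic numbering, step 3 could be sketched as:

```shell
# Add both remaining uplinks to vSwitch1, both active, teaming set to
# "Route based on the originating virtual port ID".
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 \
  --active-uplinks=vmnic1,vmnic3 --load-balancing=portid
```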
Step 4 – vSwitch0 should already contain a VM network (virtual machine) and a vmkernel network (management). Each of these needs to be modified so that vmnic2 is the active uplink and vmnic0 is the standby uplink.
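The per-portgroup failover override in step 4 could be done as shown below. Note that the portgroup names ‘VM Network’ and ‘Management Network’ are the ESXi defaults and may differ on your host:

```shell
# Override the vSwitch teaming order per portgroup: vmnic2 active, vmnic0 standby.
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name="VM Network" --active-uplinks=vmnic2 --standby-uplinks=vmnic0
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name="Management Network" --active-uplinks=vmnic2 --standby-uplinks=vmnic0
```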
Step 5 – Also on vSwitch0, a new virtual machine portgroup must be created. It should be called VSA-Front End (it must have that exact spelling). It should be configured so that vmnic0 is the active uplink and vmnic2 is the standby uplink (the reverse configuration of the virtual machine and management ports). If using VLANs, a VLAN id needs to be associated. That completes the setup on vSwitch0.
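Step 5 could be sketched from the ESXi shell as follows (VLAN id 100 is just an example value – use your own, or skip the VLAN command if you are not using VLANs):

```shell
# Create the VSA-Front End portgroup with the reverse failover order:
# vmnic0 active, vmnic2 standby.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 \
  --portgroup-name="VSA-Front End"
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name="VSA-Front End" --active-uplinks=vmnic0 --standby-uplinks=vmnic2
# Only if using VLANs (100 is an example id):
esxcli network vswitch standard portgroup set --portgroup-name="VSA-Front End" --vlan-id=100
```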
Step 6 – Now we move to vSwitch1. Create a new virtual machine portgroup called VSA-Back End (exact spelling). It should be configured so that vmnic1 is the active uplink and vmnic3 is the standby uplink. If using VLANs, a VLAN id should be associated.
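Step 6 follows the same pattern on vSwitch1 (again, VLAN id 200 is just an example):

```shell
# Create the VSA-Back End portgroup: vmnic1 active, vmnic3 standby.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 \
  --portgroup-name="VSA-Back End"
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name="VSA-Back End" --active-uplinks=vmnic1 --standby-uplinks=vmnic3
# Only if using VLANs (200 is an example id):
esxcli network vswitch standard portgroup set --portgroup-name="VSA-Back End" --vlan-id=200
```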
Step 7 – Create a vmkernel port on vSwitch1 called VSA-VMotion (exact spelling). It should be configured so that vmnic3 is the active uplink and vmnic1 is the standby uplink (the reverse configuration of the VSA-Back End portgroup). If using VLANs, this portgroup must share the same VLAN as the VSA-Front End network. That completes the setup of vSwitch1.
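Finally, step 7 could be sketched like this (vmk1 is an example interface name – use the next free vmkernel interface on your host):

```shell
# Create the VSA-VMotion portgroup with the reverse failover order to
# VSA-Back End: vmnic3 active, vmnic1 standby.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 \
  --portgroup-name="VSA-VMotion"
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name="VSA-VMotion" --active-uplinks=vmnic3 --standby-uplinks=vmnic1
# Attach a vmkernel interface to the new portgroup (vmk1 is an example name).
esxcli network ip interface add --interface-name=vmk1 --portgroup-name="VSA-VMotion"
```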
The completed network configuration should look something like this.
Get notification of these blog postings and more VMware Storage information by following me on Twitter: @CormacJHogan