For those of you who have been following my new vSphere 5.1 storage features series of blog posts, in part 5 I called out that we have a new Boot from Software FCoE feature. The purpose of this post is to delve into a lot more detail about the Boot from Software FCoE mechanism.
Most of the initial configuration is done in the Option ROM of the NIC. Suitable NICs contain what is called either an FCoE Boot Firmware Table (FBFT) or an FCoE Boot Parameter Table (FBPT). For the purposes of this post, we'll refer to it as the FBFT. This table allows the VMkernel to access the parameters set in the NIC for FCoE boot.
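To make the mechanism concrete, here is a minimal sketch (not VMware code) of how a boot loader might locate such a table in low memory by scanning for a signature. The 4-byte "FBFT" signature and the scan logic are assumptions for illustration; the real table layout is proprietary.

```python
# Illustrative sketch only: scan the 512KB-1024KB region for an assumed
# 4-byte "FBFT" signature, the way a boot loader locates the table.

FBFT_SIGNATURE = b"FBFT"
REGION_START = 512 * 1024    # 512KB
REGION_END = 1024 * 1024     # 1024KB

def find_fbft(memory):
    """Return the offset of the FBFT within the scanned region, or None."""
    end = min(REGION_END, len(memory)) - len(FBFT_SIGNATURE)
    for offset in range(REGION_START, end + 1):
        if memory[offset:offset + len(FBFT_SIGNATURE)] == FBFT_SIGNATURE:
            return offset
    return None

# Simulate 1MB of low memory with a table planted at 600KB.
mem = bytearray(1024 * 1024)
mem[600 * 1024:600 * 1024 + 4] = FBFT_SIGNATURE
print(find_fbft(bytes(mem)))  # 614400, i.e. 600KB
```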
Install ESXi on an FCoE LUN
The following is the sequence of events one would go through to install an ESXi 5.1 image on an FCoE LUN that is accessed via an FCoE-capable NIC using the Software FCoE driver found in ESXi 5.1.
- The administrator inserts the ESXi installation CD and boots the system.
- The administrator configures the FCoE option ROM to set the FCoE boot parameters (boot targets, boot LUN, VLAN ID, boot order, etc.).
- This loads the FCoE Boot Firmware Table (FBFT) into memory between 512KB and 1024KB. (This exposes information about the FCoE connection so that the ESXi setup can determine that the attached device is bootable.)
- The system restarts and loads the system BIOS and all enabled option ROMs (including the FCoE boot option ROM).
- The system boots from the ESXi installation CD.
- The ESXi boot loader loads and starts executing.
- The ESXi boot loader searches for the FBFT data in memory between 512KB and 1024KB. If found, it verifies its validity and reserves this region of memory.
- ESXi starts executing init scripts.
- One of the init scripts checks whether any FCoE-capable NICs are available in the system.
- If so, the init script loads the VMkernel module vmkfbft.
- The vmkfbft module exports the FBFT structures to user space via VSI (VMkernel System Information). This includes information such as 'Is FCoE boot enabled?', 'Which LUN is the boot LUN?' and 'What is the VLAN ID for FCoE discovery?'.
- If FCoE boot is enabled, the same init script then starts FCoE discovery on the VLAN specified for boot.
- Based on the adapter type, the init script may create a standard virtual switch, "VMware_FCoE_vSwitch", and add the FCoE-capable NICs to it. Some adapters do not need this step.
- The discovered FCoE LUNs are registered with the PSA (Pluggable Storage Architecture).
- The ESXi installer then gets the list of installable LUNs (including FCoE LUNs).
- The administrator installing the host is given a list of LUNs to choose for the ESXi installation.
- The administrator chooses the FCoE LUN to install to, and the install process continues, copying the ESXi image onto the LUN. Note that the FCoE LUN chosen for installation must be the one specified as the boot LUN in the FCoE option ROM.
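The final check in the steps above is worth emphasizing, so here is a small sketch of it (hypothetical names and structures, not VMware's implementation): the LUN chosen for installation must match the boot LUN that the option ROM exposed via the FBFT, or the host will not boot from it afterwards.

```python
# Sketch of the install-time target check described above.

def check_install_target(fbft_boot_lun, chosen_lun):
    """True only if the LUN chosen for installation is the FBFT boot LUN."""
    return chosen_lun == fbft_boot_lun

# Assumed FBFT fields, as set in the option ROM before installation.
fbft = {"fcoe_boot_enabled": True, "boot_lun": 0, "vlan_id": 200}
discovered_luns = [0, 1, 5]   # LUNs registered with the PSA after discovery

for lun in discovered_luns:
    ok = check_install_target(fbft["boot_lun"], lun)
    print(f"LUN {lun}: {'valid boot/install target' if ok else 'will not boot'}")
```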
Booting ESXi from a FCoE LUN
- The first step is to change the boot controller/device order so that the FCoE boot LUN is the first bootable device in the boot order.
- The system then starts and loads the system BIOS and all enabled option ROMs (including the FCoE boot option ROM).
- The FCoE option ROM loads the installed ESXi image from the FCoE boot LUN.
- The ESXi boot loader loads and starts executing.
- The boot loader searches for the FBFT data in memory between 512KB and 1024KB. If found, it verifies its validity and reserves this region of memory.
- ESXi starts executing the init scripts.
- One of the init scripts checks whether any FCoE-capable NICs are available in the system. If so, it loads the VMkernel module vmkfbft.
- The init script checks the ESXi configuration to see whether FCoE is enabled. If it is, the init script starts FCoE discovery using the VLAN ID.
- The discovered FCoE LUNs are registered with the PSA.
- ESXi continues booting until it is fully up and ready.
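The boot-time init-script decisions above can be condensed into a short sketch (hypothetical structures for illustration; on a real host these values come from the VSI nodes exported by the vmkfbft module):

```python
# Condensed sketch of the boot-time init-script logic.

def fcoe_boot_init(has_fcoe_nic, vsi):
    """Return the ordered actions the init script would take."""
    actions = []
    if not has_fcoe_nic:
        return actions                      # no FCoE-capable NIC: nothing to do
    actions.append("load vmkfbft")          # exports FBFT structures via VSI
    if vsi.get("fcoe_boot_enabled"):
        actions.append("start FCoE discovery on VLAN %d" % vsi["vlan_id"])
        actions.append("register discovered LUNs with PSA")
    return actions

print(fcoe_boot_init(True, {"fcoe_boot_enabled": True, "vlan_id": 200}))
```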
Just like the boot from Software iSCSI feature, the boot from Software FCoE feature allows many diskless blade servers to boot from SAN without the need for expensive HBAs or CNAs.
Why is there both an FBFT and an FBPT? The FBFT is the FCoE Boot Firmware Table. It contains the networking and FCoE details which we set in the FCoE option ROM of the NIC before commencing the installation, and it is Intel proprietary. The FBPT is the FCoE Boot Parameter Table. It serves the same role as the FBFT, but it is VMware proprietary, and we share it with other NIC vendors who want to support boot from Software FCoE.
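Since both tables carry the same boot parameters, a consumer can simply use whichever one the NIC firmware published. A tiny sketch of that idea (field names are assumptions for illustration):

```python
# Sketch: prefer whichever FCoE boot table is present.

def read_boot_table(tables):
    """Return the boot parameters from the FBFT or FBPT; None if neither."""
    for name in ("FBFT", "FBPT"):   # Intel's table, then VMware's
        if name in tables:
            return {"source": name, **tables[name]}
    return None

print(read_boot_table({"FBPT": {"boot_lun": 0, "vlan_id": 200}}))
```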
Get notification of these blogs postings and more VMware Storage information by following me on Twitter: @VMwareStorage