Do RDMs still rely on LUN ID?

I had an interesting question the other day about whether Raw Device Mappings (aka RDMs) still had a reliance on the LUN ID, especially when it comes to the vMotion of Virtual Machines which have RDMs attached. I remember some time back that we introduced a concept called Dynamic Name Resolution for RDMs, which meant that we no longer relied on a consistent HBA number or even the path to identify the RDM, but do we still use the LUN ID in vSphere 5.1?

To actually find a reference to the requirement to keep the LUN ID consistent across hosts, I had to go back to the Fibre Channel SAN Configuration Guide which we shipped with ESX 4.1. It explicitly states: “To use RDMs successfully, a given LUN must be presented with the same LUN ID to every ESX/ESXi host in the cluster.” However, this guideline only appeared under the EMC & IBM sections of the guide, and I couldn’t find anything in the 5.x documentation.

To make sure that nothing had changed around this in vSphere 5.1, I did a bit of investigation. I used my NetApp array to present a LUN to two of my ESXi hosts, but used a different LUN ID for each presentation (ID 40 to one host and ID 50 to the other):

NetApp FilerView LUN Presentation

I scanned my SAN, and I could see the LUN on each host, but with different LUN IDs.

The VMFS volume on which my VM was deployed was shared between both hosts. I then proceeded to add the Raw Device Mapping to the Virtual Machine. The VM was on host 1, so the RDM had a LUN ID of 50. I then looked in the RDM metadata file, but there was nothing in there which directly references the LUN ID of the RDM. Although the UUID does look suspiciously like part of an NAA ID (another SCSI identifier mechanism), there is definitely no LUN ID reference.

I next used the vmkfstools -q command to look at the mapping:

# vmkfstools -q WinXP-Lite_1.vmdk
Disk WinXP-Lite_1.vmdk is a Non-passthrough Raw Device Mapping
Maps to: vml.020032000060a98000572d54714e346d63444b44744c554e202020

So the RDM maps to a very long VML number. But what is the VML, and how is it generated? VML is short for VMware Legacy. It uses a combination of Controller, Target, Channel & LUN information, as well as the SCSI ID and vendor-specific info, to identify the LUN. We can parse the VML as follows:

  • CTL info – 0200320000 (the 32 here is hex for my RDM’s LUN ID: 0x32 = 50)
  • NAA id – 60a98000572d54714e346d63444b4474
  • Vendor - 4c554e202020 (HEX -> ASCII converts this to ‘LUN’ on NetApp; it differs from array to array)
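To make that layout concrete, here is a small sketch that splits a VML name into those three fields. The field offsets are inferred purely from this one example, so treat them as an assumption rather than a documented format:

```python
def parse_vml(vml):
    """Split a VML name into its assumed fields: CTL info, NAA ID, vendor suffix.

    Offsets are inferred from the single example in this post, not from a spec.
    """
    s = vml[4:] if vml.startswith("vml.") else vml
    ctl = s[:10]                  # e.g. 0200320000 - addressing type + LUN number
    naa = s[10:42]                # 16-byte NAA identifier, as 32 hex characters
    vendor_hex = s[42:]           # vendor-specific suffix
    lun_id = int(ctl[4:6], 16)    # hex 32 -> decimal 50
    vendor = bytes.fromhex(vendor_hex).decode("ascii").rstrip()
    return {"ctl": ctl, "lun_id": lun_id, "naa": naa, "vendor": vendor}

parsed = parse_vml("vml.020032000060a98000572d54714e346d63444b44744c554e202020")
print(parsed["lun_id"])  # 50
print(parsed["vendor"])  # LUN
```

Running it against the VML from the vmkfstools output above recovers LUN ID 50 and the NetApp ‘LUN’ vendor suffix.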

So, yes, even though the metadata file itself does not contain a LUN ID reference, because we use the VML as the mapping reference, there is still a reliance on the LUN ID.

I now wanted to see the effect this would have on a vMotion operation, so I tried to migrate my Virtual Machine to the other host which had the RDM presented as LUN ID 40 instead of 50. The vMotion operation failed the compatibility check as follows:

Virtual disk is a mapped direct-access LUN that is not accessible.

And just for kicks, I searched for that error. The first hit was KB 1016210. And in this KB (which elaborates on the VML layout), you will find the following statement:

To resolve this issue, LUN presentation should be made consistent for every host participating in a cluster that could run the virtual machine, the raw device mapping metadata file should be consistent with that presentation, and vCenter Server’s cache of this information should be accurate.

I think that’s pretty conclusive, don’t you? To finish, I went back to the array and had it present the LUN to all hosts with a matching LUN ID. I was then able to successfully vMotion the Virtual Machine with an RDM between ESXi hosts.
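The consistency requirement boils down to a simple check: every host that might run the VM must report the same LUN number for the device. A trivial sketch (host names and numbers here are illustrative, matching the experiment above):

```python
def lun_ids_consistent(presentations):
    # presentations maps each host name to the LUN number it sees for the device
    return len(set(presentations.values())) == 1

# Before the fix: one host saw LUN 50, the other LUN 40 -> vMotion failed
print(lun_ids_consistent({"esxi-host1": 50, "esxi-host2": 40}))  # False
# After re-presenting with matching IDs -> vMotion succeeded
print(lun_ids_consistent({"esxi-host1": 50, "esxi-host2": 50}))  # True
```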

Bottom line – yes, RDMs still have a reliance on LUN IDs matching across all hosts, even in vSphere 5.1. However in vSphere 5.5, this restriction was lifted. Read more here.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @CormacJHogan

15 thoughts on “Do RDMs still rely on LUN ID?”

  1. Keeping a consistent LUN ID across ESX hosts is a useful housekeeping best practice anyway, regardless of whether it’s required or not.
    That’s an interesting dive into the nuts and bolts of it, thanks Cormac.

      • Outside of RDMs, is there any other reason these must be consistent? With a boot-from-SAN scenario using EMC storage groups and adding all of the shared LUNs to each host’s storage group, it becomes difficult to manage host IDs as LUNs come and go in a large cluster environment.

        • I think this depends on the BIOS of the adapter. In my experience, these still require the ID of the LUN that you are going to use for boot-from-SAN.

  2. When we first invented RDMs, the LUN number was an integral part of the vml name. As a side effect, a change in LUN# would change the vml name, which explains the situation you are seeing. Going forward, ESX moved to naa-based names, but RDMs continued to rely on vml names to maintain VMFS disk format compatibility with older versions of ESX(i).

    Now here’s a trick that might let you use RDMs, even for LUNs whose LUN numbers differ on different ESX hosts: in vSphere Client, click on your host, go to the Configuration tab, click on ‘Advanced Settings’ under ‘Software’, and set Scsi -> Scsi.CompareLUNNumber to 0.

    Do it preferably on all hosts in your cluster, and repeat your experiment and watch the VM migrate successfully between the hosts with different LUN numbers.

      • We’ve run into issues with VML ID mismatches even though we use provisioning practices that ensure LUN ID consistency across cluster nodes. I was unaware of this setting, but will consider using it next time we run into the issue. It is certainly better than our typical fix, which is to power the machine off, delete all RDMs, re-add them, and power the machine back on.

        Does anyone know why this check even exists if it is possible to disable it? In other words, what are the downsides to leaving this set to zero? If no one knows I’ll open a case…

  3. Cormac, is it recommended to group all the ESX hosts into a single initiator group? That way, there is less room for the admin to introduce LUN ID inconsistency. We typically recommend single initiator/target zoning to reduce the effect of RSCNs – but on the array side, it’s probably okay to group them all together, right?

    • Wen – if I may jump in here…

      I have had occasion in the past to remove the mapping of a subset of the presented LUNs from one host in the cluster. You lose that ability when you group all of your hosts together in a single initiator group on the storage array. That is a rare requirement, but something to consider. In fact, I have worked in some environments where it was required to have a separate initiator group for each HBA in each host. That way, when removing a LUN, we could see half the paths drop before removing the remaining paths. That helps to prevent removing the wrong LUN. You have to weigh the benefits (ease of use and automatically consistent LUN IDs) against the loss of flexibility.

  4. @Satyam
    Scsi.CompareLUNNumber won’t work in this use case, as Virtual Center won’t be able to filter out the LUN number bits when Scsi.CompareLUNNumber = 0 is set.
    I’ve verified this is the case on the latest version of Virtual Center.

    So the KB still applies for current releases of ESX.

    Just one small correction here on the LUN identification:
    CTL info – 0200320000
    The first two bytes determine how the device is addressed, i.e. 0200 means NAA addressing is used by the array to advertise its LUN on the SAN.

    NAA, or Network Address Authority, is a naming format defined by INCITS T11 for Fibre Channel protocols – see RFC 3980.

    I think the naa format can be a 64- or 128-bit identifier, e.g. naa.52004567BA64678D

    So basically, when we generate the VML, the ESX SCSI midlayer can detect how the device is addressed by interrogating some info from the device itself via an INQUIRY request (Page 0x83).

    Now, I don’t think modern arrays use SERIALNUM identification anymore, but if there are some still out there, then we will detect the device’s SERIAL number from the INQUIRY payload.

    Consequently, the vml will probably start with vml.010032… or vml.030032…

  5. Nice article Cormac! I recently discovered a host LUN ID mismatch and was curious about how potentially harmful it might be. As has been mentioned above, host LUN ID matching has been a best practice with ESX for a long time, but I’ve never known exactly why.

    Another scenario for separate storage groups is in a boot from SAN configuration.