vSphere 5.5, RDMs and Microsoft Clustering

I was having a conversation with one of our tech support guys (Greg Williams) recently about the relaxation in vSphere 5.5 that allows Raw Device Mappings (RDMs) to be presented to different hosts using different SCSI identifiers while still supporting vMotion operations. You can read the post where I described how the restriction has been relaxed here. Greg mentioned that he was handling a case where a customer wished to share a physical mode/passthru RDM between VMs on different ESXi hosts with a view to running Microsoft Clustering Services (MSCS) on top. We call this CAB, or Clustering Across Boxes.

MSCS is probably the only real use case for passthru RDMs these days. We need to use passthru RDMs with MSCS because they allow all SCSI commands to be passed all the way down to the disk. MSCS uses SCSI reservations on disks to gain quorum/ownership/control of services in the event of a failure/fail-over. If we didn’t use passthru RDMs and instead used VMDKs or virtual mode/non-passthru RDMs, the SCSI reservation would get translated to a file lock. This would be okay if we were running MSCS using VMs on the same ESXi host (we call this a CIB configuration, short for Clustering in a Box), but it wouldn’t allow MSCS to work correctly across ESXi hosts.
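
For what it’s worth, a physical mode (passthru) RDM is just a small mapping file on a VMFS datastore that points at the raw LUN. As a rough sketch, it can be created from the ESXi shell with vmkfstools (the device name and paths below are placeholders, not taken from any particular environment):

    # Create a physical mode (passthru) RDM pointer file for the shared LUN.
    # -z = physical/passthru compatibility; -r would create a virtual mode RDM instead.
    vmkfstools -z /vmfs/devices/disks/naa.600a0b80002fXXXX \
               /vmfs/volumes/datastore1/mscs-node1/quorum_rdm.vmdk

The mapping file itself lives on VMFS, but SCSI commands issued to that virtual disk are passed straight through to the array, which is what allows MSCS to place its reservations on the physical LUN.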

We wondered if this lifting of restrictions on the LUN ID for vMotion would also allow us to present the RDM to different ESXi hosts with different IDs, and still allow virtual machines on those hosts to share the RDM and configure MSCS. Well, to cut to the chase, that restriction is still in place. If the RDM is mapped using different identifiers on different hosts, you will not be able to share that RDM with the VMs that you want to have running MSCS.

It would appear that vCenter continues to check for an identical VML (VMware Legacy) identifier when sharing an RDM with multiple VMs on different ESXi hosts. If the VML ID is different (which it will be if a different SCSI ID is used to present the RDM to different ESXi hosts), then you will not be able to configure MSCS.

Bottom line: if you wish to use Microsoft Clustering Services with vSphere 5.5, you must continue to follow the practice of presenting the passthru RDM/LUN with the same SCSI ID to all ESXi hosts that will have virtual machines sharing the RDM. This is true even in vSphere 5.5, where we have relaxed some RDM restrictions to enable vMotion of VMs across ESXi hosts where the RDM is presented with a different ID.
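
If you want to sanity-check this before building the cluster, both the LUN number and the VML identifier that vCenter compares can be viewed from the ESXi shell. A minimal sketch (the naa identifier is a placeholder) – run the same commands on every host in the cluster and confirm the outputs match:

    # Show the device details on this host; the 'Other UIDs' field contains the vml identifier.
    esxcli storage core device list -d naa.600a0b80002fXXXX | grep -i vml

    # Show the paths to the device; the runtime name ends in the LUN number (L<n>).
    esxcli storage core path list -d naa.600a0b80002fXXXX | grep "Runtime Name"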

20 Replies to “vSphere 5.5, RDMs and Microsoft Clustering”

  1. Even if we had to use the same IDs on different hosts, the ability to vMotion a guest that is using a physical passthrough RDM that is shared between two different VMs would be AMAZING! Fingers crossed for the future…

    1. Hey Tim,

      So we can vMotion VMs with RDMs, and we can also Storage vMotion VMs with RDMs, but the latter only moves the mapping file.

      I’m guessing you are talking about Storage vMotion and the ability to migrate the actual RDM contents to a new disk on the same host (or indeed another host).

      I wrote about this on the vSphere Storage blog here – http://blogs.vmware.com/vsphere/2012/02/migrating-rdms-and-a-question-for-rdm-users.html – and I also followed up with a post on why this is so difficult to implement – http://blogs.vmware.com/vsphere/2013/07/the-difficulty-in-implementing-ptrdm-to-ptrdm-migrations.html

      But who knows – there are some very clever people at VMware. Perhaps they’ll find a way to do this at some point.

      1. I was actually referring to vMotioning VMs that are engaged in SCSI bus sharing…

        Scenario:
        VM “A” has a physical passthrough LUN assigned to it for MSCS. The virtual SCSI controller’s bus sharing is set to Physical mode.

        VM “B” has the same physical passthrough LUN assigned to it. The virtual SCSI controller’s bus sharing is also set to physical.

        As far as I know, isn’t this the only way to share a physical RDM LUN between two different VMs for MSCS (at least at the hypervisor level)?
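
        For reference, the relevant pieces of that configuration in each VM’s .vmx look something like the lines below (a sketch only – the controller number, datastore and file names are just examples):

          scsi1.present = "TRUE"
          scsi1.virtualDev = "lsisas1068"
          scsi1.sharedBus = "physical"
          scsi1:0.present = "TRUE"
          scsi1:0.fileName = "/vmfs/volumes/shared_ds/mscs-node1/quorum_rdm.vmdk"

        Both VMs point a controller in physical bus sharing mode at the same passthru RDM pointer file.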

        1. Ah – I see. That scenario I have not tried in vSphere 5.5. Leave it with me. I’ll try to get an opportunity to test it over the coming weeks.

          1. I have some VMs set up in this configuration in my 5.5 environment. vMotion simply states that VMs engaged in SCSI bus sharing are not compatible.

            I was wondering if an alternative method would work though… Perhaps using the multi-writer flag? I don’t think that would support the necessary SCSI commands though.

            I know that for nesting ESXi on top of VSAN you need to enable this hack: esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1

            Would be nice if something similar existed to help with this use case.

          1. Hey Tim,
            what kind of application are you using in your VMs with SCSI bus sharing enabled?
            The multi-writer flag works perfectly for vMotioning VMs sharing disks (in this case, VMDKs). We migrated from shared RDMs to multi-writer VMDKs for exactly this reason. In our environment, these kinds of VMs are running Oracle DBs.
            Don’t know if this is also supported for other applications.
            Check out these links:
            http://blogs.vmware.com/apps/2013/10/setting-multi-writer-flag-for-oracle-rac-on-vsphere-without-any-downtime.html
            http://kb.vmware.com/kb/1034165
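
            For reference, the flag itself is just a per-disk setting in the .vmx (added as an advanced configuration parameter; the disk numbering below is only an example), and the shared VMDK has to be eagerzeroedthick:

              scsi1:0.sharing = "multi-writer"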

            BR
            Steffen

            2. Using MS failover clusters is my primary use case, especially for setting up MSSQL with shared storage. I *believe* that MSCS requires SCSI locking/reservations in order to function properly. I have tried using the multi-writer flag with a MSCS setup and it failed the pre-deployment validation testing.

            Definitive information on the multi-writer flag is hard to come by. We have also had success in deploying it for use with Oracle RAC.

            Maybe related… It is interesting to me that when trying to deploy VSAN in a nested ESXi environment enabling the fake SCSI reservations enables VSAN to function. Makes me wonder if a similar setup would suffice for MSCS?

  2. I’m curious what your experience has been with the performance of physical RDMs vs. VMFS/.VMDK when testing at low latency and around 700K IOPS? In some of my recent testing I was able to drive significantly more IOPS (more than 30%) by utilizing RDM disks. There seemed to be a clear benefit to RDM with latency at those low levels, just curious of your thoughts.

    1. This is not a topic I have looked at in some time, Rawley.

      I am under the impression that, for the most part, their performance should be ‘more or less’ the same. But the most recent study (IIRC) was done back in the ESX 3.5 days, and I’m not sure if anyone has done a newer one since then.

      1. Interesting. This is probably a corner case that many customers have not run into, given the number of IOPS and the latency I was testing. For reference, the specific testing I’m referring to was 4K 100% RR. The storage system is characterized at 130us @ 1.1M IOPS, and I was able to achieve 1.02M IOPS @ 100us with an ESXi 5.5 host. Those numbers were RDM based; a like-for-like VMFS configuration (same # of VMs, LUNs, etc.) delivered about 30% fewer max IOPS. I didn’t spend a lot of extra time diagnosing as that wasn’t my goal this time around, but I just wondered if you had any insight.

        My assumption was that the added latency (albeit small) of VMFS is irrelevant at storage latencies of 1ms-10ms, but when those latencies are 0.1ms or less, the VMFS overhead has more of an impact.

        Thanks for the reply as I know this was off-topic of the thread.

  3. Quite annoying that legacy checks are restricting the relaxations.

    RDMs can be presented with different SCSI IDs on different ESXi hosts as long as they are not used for Microsoft Clustering Services – is that correct?

  4. “…MSCS is probably the only real use case for passthru RDMs these days…”

    What about Mapping an SQL DB to a LUN directly for performance reasons?

    1. Do you see that much of a performance increase? To my knowledge, most people these days run their SQL DBs on VMDKs.

  5. Great post! I think it’s also worth mentioning that in a CAB MSFC design with 5.5 the round robin PSP is now supported. kb.vmware.com/kb/2052238
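
    For anyone making that change on existing MSCS RDM devices, the PSP can be set per device from the ESXi shell – a quick sketch, with a placeholder device identifier:

      # Set the path selection policy for the shared RDM LUN to Round Robin on this host.
      esxcli storage nmp device set --device naa.600a0b80002fXXXX --psp VMW_PSP_RR

      # Verify the change.
      esxcli storage nmp device list --device naa.600a0b80002fXXXX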
