Setting up a Microsoft cluster on VMware used to be a fairly straightforward task with a very minimal set of considerations. Over time, the support documentation has evolved into something that looks like it was written by the U.S. Internal Revenue Service. I was an accountant in my previous life, and I remember Alternative Minimum Tax code that was easier to follow than what we have today: a 50-page PDF representing VMware's requirements for MSCS support. Even with that, I'm not sure Microsoft supports MSCS on VMware. The Microsoft SVVP program supports explicit versions and configurations of Windows 2000/2003/2008 on ESX 3.5 Update 2 and 3, and ESXi 3.5 Update 3, but no mention is made of clustering. I could not find a definitive answer on the Microsoft SVVP program site other than the following disclaimer:
For more information about Microsoft’s policies for supporting software running in non-Microsoft hardware virtualization software please refer to http://support.microsoft.com/?kbid=897615. In addition, refer to http://support.microsoft.com/kb/957006/ to find more information about Microsoft’s support policies for its applications running in virtual environments.
At any rate, here are some highlights of MSCS setup on VMware Virtual Infrastructure, and by the way, all of this information is fair game for the VMware VCP exam.
Prerequisites for Cluster in a Box
To set up a cluster in a box, you must have:
* ESX Server host, one of the following:
  * ESX Server 3 – An ESX Server host with a physical network adapter for the service console. If the clustered virtual machines need to connect with external hosts, an additional network adapter is highly recommended.
  * ESX Server 3i – An ESX Server host with a physical network adapter for the VMkernel. If the clustered virtual machines need to connect with external hosts, a separate network adapter is recommended.
* A local SCSI controller. If you plan to use a VMFS volume that exists on a SAN, you need an FC HBA (QLogic or Emulex).
You can set up shared storage for a cluster in a box either by using a virtual disk or by using a remote raw device mapping (RDM) LUN in virtual compatibility mode (non-pass-through RDM).
When you set up the virtual machine, you need to configure:
* Two virtual network adapters.
* A hard disk that is shared between the two virtual machines (quorum disk).
* Optionally, additional hard disks for data shared between the two virtual machines if your setup requires them. When you create the hard disks as described in the document, the system creates the associated virtual SCSI controllers. (A sketch of the relevant .vmx entries follows this list.)
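For illustration only (this is my own sketch, not an excerpt from the VMware guide), a cluster-in-a-box setup typically places the shared disks on a second virtual SCSI controller with virtual bus sharing enabled. The datastore, folder, and file names below are hypothetical placeholders:

```
# Hypothetical excerpt from one node's .vmx file (ESX 3.x syntax).
# Shared disks sit on a second SCSI controller so the boot disk stays unshared.
scsi1.present = "true"
scsi1.virtualDev = "lsilogic"      # LSILogic adapter, per the caveats later in this post
scsi1.sharedBus = "virtual"        # virtual bus sharing for cluster in a box

# Quorum disk: both nodes' .vmx files reference the same .vmdk
scsi1:0.present = "true"
scsi1:0.fileName = "/vmfs/volumes/datastore1/cluster/quorum.vmdk"

# Optional shared data disk
scsi1:1.present = "true"
scsi1:1.fileName = "/vmfs/volumes/datastore1/cluster/data1.vmdk"
```

Both virtual machines point at the same .vmdk files; the main thing that changes for the other cluster types is the bus sharing mode and the kind of disk behind the mapping.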
Prerequisites for Clustering Across Boxes
The prerequisites for clustering across boxes are similar to those for cluster in a box. You must have:
* ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
  * ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
  * ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
* Shared storage must be on an FC SAN.
* You must use an RDM in physical or virtual compatibility mode (pass-through RDM or non-pass-through RDM). You cannot use virtual disks for shared storage. (A sketch of creating the RDM mapping files follows this list.)
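As a hedged sketch (the device identifier, datastore, and file names are placeholders, not values from the VMware document), the RDM mapping file for a shared FC LUN can be created from the service console with vmkfstools and then attached to both virtual machines:

```
# List candidate FC LUNs visible to the host
ls /vmfs/devices/disks/

# Non-pass-through RDM (virtual compatibility mode), usable for cluster across boxes
vmkfstools -r /vmfs/devices/disks/vmhba1:0:12:0 /vmfs/volumes/datastore1/cluster/quorum-rdm.vmdk

# Pass-through RDM (physical compatibility mode), required for standby host clustering
vmkfstools -z /vmfs/devices/disks/vmhba1:0:12:0 /vmfs/volumes/datastore1/cluster/quorum-rdm.vmdk
```

Both nodes' second SCSI controllers then reference the same mapping file, with the controller's bus sharing set to physical so SCSI reservations pass between the two ESX hosts.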
Prerequisites for Standby Host Clustering
The prerequisites for standby host clustering are similar to those for clustering across boxes. You must have:
* ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
  * ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
  * ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
* You must use RDMs in physical compatibility mode (pass-through RDM). You cannot use virtual disks or RDMs in virtual compatibility mode (non-pass-through RDM) for shared storage.
* You cannot have multiple paths from the ESX Server host to the storage.
* Running third-party multipathing software is not supported. Because of this limitation, VMware strongly recommends a single physical path from the native Windows host to the storage array in a standby host clustering configuration. The ESX Server host automatically uses native ESX Server multipathing, which can result in multiple paths to shared storage.
* Use the STORport Miniport driver for the FC HBA (QLogic or Emulex) in the physical Windows machine.
| Shared storage option | Cluster in a Box | Cluster Across Boxes | Standby Host Clustering |
| --- | --- | --- | --- |
| Virtual disks | Yes | No | No |
| Pass-through RDM (physical compatibility mode) | No | Yes | Yes |
| Non-pass-through RDM (virtual compatibility mode) | Yes | Yes | No |
Caveats, Restrictions, and Recommendations
This section summarizes caveats, restrictions, and recommendations for using MSCS in a VMware Infrastructure environment.
* VMware only supports third-party cluster software that is specifically listed as supported in the hardware compatibility guides. For the latest updates to VMware support for Microsoft operating system versions for MSCS, or for any other hardware-specific support information, see the Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i.
* Each virtual machine has five PCI slots available by default. A cluster uses four of these slots (two network adapters and two SCSI host bus adapters), leaving one PCI slot for a third network adapter (or other device), if needed.
* VMware virtual machines currently emulate only SCSI-2 reservations and do not support applications using SCSI-3 persistent reservations.
* Use the LSILogic virtual SCSI adapter.
* Use Windows Server 2003 SP2 (32-bit or 64-bit) or Windows 2000 Server SP4. VMware recommends Windows Server 2003.
* Use two-node clustering.
* Clustering is not supported on iSCSI or NFS disks.
* NIC teaming is not supported with clustering.
* The boot disk of the ESX Server host should be on local storage.
* Mixed HBA environments (QLogic and Emulex) on the same host are not supported.
* Mixed environments using both ESX Server 2.5 and ESX Server 3.x are not supported.
* Clustered virtual machines cannot be part of VMware clusters (DRS or HA).
* You cannot use migration with VMotion on virtual machines that run cluster software.
* Set the I/O time-out to 60 seconds or more by modifying HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue. The system might reset this I/O time-out value if you recreate a cluster, in which case you must set it again. (A sketch of one way to set this value follows this list.)
* Use the eagerzeroedthick format when you create disks for clustered virtual machines. By default, the VI Client or vmkfstools create disks in zeroedthick format. You can convert a disk to eagerzeroedthick format by importing, cloning, or inflating the disk. Disks deployed from a template are also in eagerzeroedthick format. (A vmkfstools sketch follows this list.)
* Add disks before networking, as explained in the VMware Knowledge Base article at http://kb.vmware.com/kb/1513.
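As a quick illustration of the disk time-out bullet above (the key path and value name come from that bullet; running it from a command prompt on each Windows cluster node and rebooting afterward is my own suggestion, not VMware's wording), the change can be made with reg.exe:

```
rem Set the Windows disk I/O time-out to 60 seconds (decimal) on a cluster node
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f

rem Verify the value afterwards
reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue
```

And a hedged sketch of creating a shared cluster disk in eagerzeroedthick format from the ESX service console, where the size, datastore, and file names are placeholders of my own choosing:

```
# Create a 10 GB virtual disk in eagerzeroedthick format for use as a shared cluster disk
vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/cluster/data1.vmdk
```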
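Either approach covers the two disk-related caveats that most often trip people up when the cluster validation or failover tests start throwing errors.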
phew!
Hi experts,
Can anybody confirm whether VMware supports 4-node clustering in a VMware environment with FC LUNs?
-Durjay