Message Queuing in Server Clusters

Applies To: Windows Server 2008

Message Queuing in server clusters

The Message Queuing service can be implemented in a server cluster, which appears to network clients as a single, highly available system. Such clusters are used to increase reliability and fault tolerance through failover, as well as to achieve load balancing. For these clusters, the Message Queuing service is installed on every node in the cluster. These instances of the Message Queuing service are needed to support the Message Queuing resources in the virtual servers. In addition, although these instances are unaware of clustering, they are needed, even in a clustered system, to serve the many applications and system components that run outside the context of any virtual server and call Message Queuing APIs.

Cluster groups and virtual servers

A cluster group is a collection of cluster resources with the following characteristics:

  • Groups define the units of failover. That is, when one resource in a group fails and it is necessary to move the resource to an alternate node, all of the resources in the group are moved to the alternate node.

  • A group is always owned by one node at any point in time. Likewise, a resource is always owned by a single group. These relationships ensure that all of a group's members reside on the same node.

To access a network application or resource in a nonclustered environment, network clients must connect to a physical server (that is, a specific computer on the network identified by a unique network name and Internet Protocol (IP) address). If that server fails, access to the application or resource is impossible. The Failover Clustering feature in Windows Server 2008 enables the creation of virtual servers. Unlike a physical server, a virtual server is not associated with a specific computer and can be failed over like a group. If the node hosting the virtual server fails, clients can still access its resources using the same server name. A virtual server is a group that contains a network name resource, IP address resources, other virtual servers, and other resources, including applications such as Message Queuing, that the clients of the virtual server access.

When you form cluster groups for Message Queuing, you must create a virtual server. The Message Queuing resource depends on a Physical Disk resource, to store message and queue data, and on a Network Name resource, so that remote clients can access it. This virtual server is created on each node of the cluster using standard cluster tools (Failover Cluster Management is the standard GUI tool) or the cluster APIs. The first time that a Message Queuing resource on any virtual server is brought online, the cluster Network Name resource creates a pseudo-computer object, and the Message Queuing resource creates the Message Queuing objects under it in Active Directory Domain Services. Each virtual server functions similarly to a physical computer, so a Message Queuing server running in the context of a virtual server provides services similar to those of a Message Queuing server running on a physical computer. In particular, queues can be created on a virtual server, and messages can be sent to them. Such queues are addressed using the VirtualServerName\QueueName syntax.
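For example, the following minimal sketch, written against the MSMQ C API (mq.h, linked with mqrt.lib), sends a message to a queue addressed through a virtual server. The names MyVirtualServer and MyQueue are hypothetical placeholders.

    #include <windows.h>
    #include <mq.h>        // MSMQ C API; link with mqrt.lib
    #include <iostream>

    int main()
    {
        // Queues on a virtual server are addressed as VirtualServerName\QueueName.
        WCHAR formatName[256];
        DWORD formatNameLen = 256;
        HRESULT hr = MQPathNameToFormatName(L"MyVirtualServer\\MyQueue",
                                            formatName, &formatNameLen);
        if (FAILED(hr)) { std::wcerr << L"Format name lookup failed\n"; return 1; }

        QUEUEHANDLE hQueue;
        hr = MQOpenQueue(formatName, MQ_SEND_ACCESS, MQ_DENY_NONE, &hQueue);
        if (FAILED(hr)) { std::wcerr << L"Open failed\n"; return 1; }

        // Send a message carrying only a label property.
        MSGPROPID propIds[1] = { PROPID_M_LABEL };
        MQPROPVARIANT propVars[1] = {};
        propVars[0].vt = VT_LPWSTR;
        propVars[0].pwszVal = const_cast<LPWSTR>(L"Hello from a cluster client");

        MQMSGPROPS msgProps = { 1, propIds, propVars, nullptr };
        hr = MQSendMessage(hQueue, &msgProps, MQ_NO_TRANSACTION);

        MQCloseQueue(hQueue);
        return FAILED(hr) ? 1 : 0;
    }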

For more information about creating virtual servers for Message Queuing by using Failover Cluster Management and about creating a DTC resource in a server cluster, see Installing Message Queuing in a Server Cluster.

Note that when you select the Physical Disk resource for the Message Queuing cluster group, Message Queuing allocates its storage in the MSMQ\STORAGE folder on the shared disk. After storage has been allocated, you cannot modify the folder location.

Message Queuing clients can communicate with a standard Message Queuing server running on a node of a server cluster, or, if the Message Queuing applications are cluster-aware, they can run on a Message Queuing server in the context of a virtual server.
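A cluster-aware application needs to address queues by the virtual server's network name rather than the node's computer name. One approach is sketched below, under the assumption that the cluster Network Name resource sets the _CLUSTER_NETWORK_NAME_ environment variable for resources configured to use the network name; the application reads that variable and falls back to the local computer name when it is absent. MyQueue is a hypothetical queue name.

    #include <windows.h>
    #include <iostream>
    #include <string>

    // Returns the name to use when building queue paths. The
    // _CLUSTER_NETWORK_NAME_ environment variable is assumed to be set for
    // resources that are configured to use the cluster network name; when
    // it is absent, the local computer name is used instead.
    std::wstring ResolveQueueHostName()
    {
        WCHAR buffer[256] = {};
        DWORD len = GetEnvironmentVariableW(L"_CLUSTER_NETWORK_NAME_",
                                            buffer, ARRAYSIZE(buffer));
        if (len > 0 && len < ARRAYSIZE(buffer))
            return buffer;                     // running in a virtual server context

        DWORD size = ARRAYSIZE(buffer);
        GetComputerNameW(buffer, &size);       // not clustered: use the node name
        return buffer;
    }

    int main()
    {
        // MyQueue is a hypothetical queue name.
        std::wstring path = ResolveQueueHostName() + L"\\MyQueue";
        std::wcout << L"Queue path: " << path << std::endl;
        return 0;
    }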

In the active/active model, which is supported by all versions of Message Queuing later than MSMQ 1.0, multiple Message Queuing virtual servers can be running (active) on a single node. Thus, in a simplified example, if a server cluster has two nodes with Message Queuing installed on both, and the two nodes host a total of three virtual servers, any of these virtual servers can fail over to the other node.

In the event of a failover, the resources of a group on one node, initially the preferred node, are taken offline and then brought back online on another node. The data stored on the physical disk remains unchanged because only the ownership of the Physical Disk resource is transferred to the new node. Upon failback, the resources are moved back logically to the preferred node.

Because the Message Queuing Triggers service is cluster-aware and supports the active/active paradigm, when the Message Queuing service fails over to another node, the Triggers service fails over along with it. When failover occurs, the trigger and rule definitions, which are stored in the Windows registry, are propagated between cluster nodes along with the other Message Queuing registry keys. After failover, the Triggers service can thus continue to process the incoming messages in each monitored queue and invoke the applicable stand-alone executable or COM component according to the defined rules.

Virtual servers are managed using standard snap-ins. The Active Directory Users and Computers snap-in can be used to manage the Message Queuing resources within a virtual server just as it is used to manage Message Queuing computers in a network. You can also open the Computer Management snap-in from within a virtual server for local management of the Message Queuing resource. Note that the Computer Management snap-in must be started from the Manage MSMQ option that is available by right-clicking the clustered Message Queuing group in Failover Cluster Management.

Note

Manage MSMQ must be started from the cluster node that is currently hosting the clustered Message Queuing service.

In a server cluster, the Message Queuing servers installed on the nodes must all provide the same set of services to enable failover. The types of services provided by Message Queuing running in the context of a virtual server depend on the configuration of the Message Queuing servers installed on the host nodes. For example, if Message Queuing servers with routing services enabled are installed on the nodes of a server cluster, these services will also be available in the context of the virtual servers hosted on those nodes.

The Windows 2000 Client Support feature cannot run on a virtual server. However, computers that require this functionality can query a remote Message Queuing server that has this feature enabled and that runs on a domain controller.

Note

The Windows 2000 Client Support feature has been removed from Message Queuing 5.0. To support message queuing on down-level Windows 2000 clients, at least one Windows Server 2003 or Windows Server 2008 domain controller with the Windows 2000 Client Support feature installed must be present in the domain.

IP addressing with multiple network interface cards on a cluster node

On the physical node of a cluster, there are usually two network interface cards with different IP addresses. One of these network interface cards is used only for internal cluster communication. To ensure that Message Queuing does not use the private IP address of an internal cluster network interface card, which would result in messaging failure, Message Queuing maintains a list of all private IP addresses on a cluster node. When Message Queuing starts, it checks this list to make sure that the IP address of an internal network interface card is not selected for messaging. Alternatively, you can specify an IP address on the cluster node that must be used for messaging, instead of allowing Message Queuing to select an IP address at random from those available.

The IP address used for messaging is selected by applying the following conditions in order (a sketch of this selection logic follows the list):

  • On a virtual server, the IP address defined for the virtual server's IP Address resource is used.

  • Otherwise, if the cluster node has only one IP address, that address is used.

  • Otherwise, if the node has multiple IP addresses, the cluster API is used to build two lists: a first list of all private (internal cluster) IP addresses and a second list of all of the node's IP addresses. The two lists are then compared, and an address that appears in the second list but not in the first is selected. An informational event that reports the chosen IP address is issued. If the only available addresses are those in the first list, one of them is used, and an event is issued.
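The following sketch illustrates the selection rule for the multiple-address case just described. The two enumeration helpers and their return values are hypothetical stand-ins for the lists that the Message Queuing service obtains through the cluster API.

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical stand-ins for the two lists that the Message Queuing
    // service obtains through the cluster API.
    std::vector<std::wstring> GetPrivateClusterAddresses()   // list 1: private addresses
    {
        return { L"10.10.10.1" };
    }
    std::vector<std::wstring> GetNodeAddresses()             // list 2: all node addresses
    {
        return { L"10.10.10.1", L"192.168.1.20" };
    }

    std::wstring ChooseMessagingAddress()
    {
        const auto privateAddrs = GetPrivateClusterAddresses();
        const auto nodeAddrs = GetNodeAddresses();

        if (nodeAddrs.size() == 1)
            return nodeAddrs.front();            // only one address: use it

        // Prefer an address that appears in the node list but not in the
        // private list; the service logs an informational event here.
        for (const auto& addr : nodeAddrs)
            if (std::find(privateAddrs.begin(), privateAddrs.end(), addr)
                    == privateAddrs.end())
                return addr;

        // Only private addresses are available: fall back to one of them.
        return nodeAddrs.empty() ? std::wstring() : nodeAddrs.front();
    }

    int main()
    {
        std::wcout << L"Selected: " << ChooseMessagingAddress() << std::endl;
        return 0;
    }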

Note that after a cluster node is restarted, the Message Queuing service running on the node does not restart automatically, and you need to restart it manually. However, Message Queuing resources that were online do come back online automatically.
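You can restart the service from the Services snap-in or with net start msmq; the following minimal sketch shows the programmatic equivalent through the Win32 Service Control Manager API (MSMQ is the service name of the Message Queuing service). Administrative rights are required.

    #include <windows.h>

    int main()
    {
        SC_HANDLE scm = OpenSCManagerW(nullptr, nullptr, SC_MANAGER_CONNECT);
        if (!scm) return 1;

        // "MSMQ" is the service name of the Message Queuing service.
        SC_HANDLE svc = OpenServiceW(scm, L"MSMQ", SERVICE_START);
        if (!svc) { CloseServiceHandle(scm); return 1; }

        BOOL ok = StartServiceW(svc, 0, nullptr);   // no extra arguments

        CloseServiceHandle(svc);
        CloseServiceHandle(scm);
        return ok ? 0 : 1;
    }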

IP addressing for multicast messaging on a cluster node

On a cluster node with multiple network interface cards, a problem arises when the computer tries to send multicast messages by means of these multiple network interface cards: Message Queuing chooses a network interface card at random for sending multicast messages. If the private network interface card is chosen, multicast recipients will not receive the message. To work around this issue, you can specify the source IP address to be used for sending multicast messages. To do this, create the string registry entry HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSMQ\Parameters\MulticastBindIP and set it to the desired address in the usual IP address format. After setting this registry entry, restart the Message Queuing service for the change to take effect.
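For example, the following sketch creates the MulticastBindIP value with the Win32 registry API; 10.0.0.5 is a placeholder for the address of the network interface card intended for outgoing multicast traffic, and the program must run with administrative rights.

    #include <windows.h>

    int main()
    {
        HKEY hKey;
        LONG rc = RegCreateKeyExW(HKEY_LOCAL_MACHINE,
                                  L"SOFTWARE\\Microsoft\\MSMQ\\Parameters",
                                  0, nullptr, 0, KEY_SET_VALUE, nullptr,
                                  &hKey, nullptr);
        if (rc != ERROR_SUCCESS) return 1;

        // Placeholder address; substitute the address of the network
        // interface card intended for outgoing multicast traffic.
        const WCHAR value[] = L"10.0.0.5";
        rc = RegSetValueExW(hKey, L"MulticastBindIP", 0, REG_SZ,
                            reinterpret_cast<const BYTE*>(value), sizeof(value));

        RegCloseKey(hKey);
        // Restart the Message Queuing service for the change to take effect.
        return rc == ERROR_SUCCESS ? 0 : 1;
    }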

Warning

Incorrectly editing the registry may severely damage your system. It is recommended that you back up any valuable data on the computer before making changes to the registry.

Note that for multicast messaging on a cluster node, if no IP address is specified in the MulticastBindIP registry entry, the IP address to be used is derived from the procedure detailed previously in the section entitled IP addressing with multiple network interface cards on a cluster node.