In this topic, you will deploy a two-node Microsoft Exchange 2007 SP1 Mailbox server cluster configured for cluster continuous replication (CCR). This configuration increases availability by using log shipping to replicate mailbox data between the nodes of an active/passive cluster.

Tasks

  1. Prepare Each Node of the Cluster
  2. Cluster Network Configuration
  3. Create a New Cluster
  4. Configure the File Share Witness
  5. Ensure that All Cluster Nodes Are Online
  6. Install the Mailbox Server Role on the Active Cluster Node
  7. Configure Storage and Volume Mount Points on the Active Cluster Node
  8. Configure Storage and Volume Mount Points on the Passive Cluster Node
  9. Create New Storage Groups and Databases on the CCR Cluster
  10. Install the Mailbox Server Role on the Passive Cluster Node
  11. Verify the Ability to Move a Clustered Mailbox Server between the Nodes in the Cluster

Prerequisites

You should have several unpartitioned disk volumes (or LUNs) attached to each cluster node. These volumes will be used for Storage Groups, Mail Databases, and transaction logs.

Prepare Each Node of the Cluster

Perform the following actions on EXMBX01-NODE1 and EXMBX01-NODE2.

Procedure W03-DWHE.14: To install prerequisites on each node of the Active/Passive Cluster

  1. Install Windows Server 2003 R2 Enterprise Edition (x64) with SP2.

  2. Install IIS.

  3. Install the Microsoft .NET Framework 2.0 with SP1.

  4. Install the Windows Server 2003 Support Tools.

  5. Join the Fabrikam domain.

  6. Enable the ASP.NET 2.0 Web service extensions in Internet Information Services Manager.

  7. Install Windows PowerShell 1.0.

Procedure W03-DWHE.15: To create the cluster service account

  1. On AD01, run Active Directory Users and Computers.

  2. In the Users organizational unit (OU) of the Fabrikam domain, create a new user named ClusterAdmin. Set the password on this account to never expire.

  3. Add the ClusterAdmin account as a member of the Windows-based Hosting Service Accounts group.
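
If you prefer to script the account creation, the following commands, run on AD01, are one way to do it. The distinguished names below assume the default Users container of the Fabrikam domain and the group name shown above; substitute your own password for <Password>:

    dsadd user "CN=ClusterAdmin,CN=Users,DC=fabrikam,DC=com" -samid ClusterAdmin -pwd <Password> -pwdneverexpires yes
    dsmod group "CN=Windows-based Hosting Service Accounts,CN=Users,DC=fabrikam,DC=com" -addmbr "CN=ClusterAdmin,CN=Users,DC=fabrikam,DC=com"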

Cluster Network Configuration

You must have a sufficient number of static IP addresses available when you create clustered mailbox servers in a two-node CCR configuration. IP addresses are required for both the public and private networks, and the private network must be on a different subnet than the public network. Requirements related to private and public addresses are as follows:

  • Private addresses — Each node requires one static IP address for each network adapter that is used for the cluster private network. You must use static IP addresses that are not on the same subnet or network as the public network.
  • Public addresses — Each node requires one static IP address for each network adapter that is used for the cluster public network. Additionally, static IP addresses are required for the failover cluster and for the clustered mailbox server so that they can be accessed by clients and administrators. You must use static IP addresses that are not on the same subnet as the private network.

Enable Public and Private Network Interfaces on cluster nodes

Enable both a private network interface (which will be used for the cluster heartbeat) and a public network interface (which will be used for client communication) on EXMBX01-NODE1 and EXMBX01-NODE2.

Configure Network Connections for CCR Replication

This section explains how to configure the network connections for a Microsoft Exchange 2007 SP1 clustered mailbox server in a cluster continuous replication (CCR) environment. Proper configuration of the network connections is necessary to ensure that client connections to the cluster server are possible and occur in a timely manner. There are three procedures that must be performed on both cluster nodes prior to forming the clusters:

  • Configure the public network connections.
  • Configure the private network connections.
  • Configure the network connection order.

Procedure W03-DWHE.16: To configure the public network connections for a clustered mailbox server

  1. In the Network Connections console, rename your public network connection (for example, Public).

  2. Select Internet Protocol (TCP/IP) and File and Printer Sharing for Microsoft Networks for the public network connection.

Procedure W03-DWHE.17: To configure the private network connections for a clustered mailbox server

  1. In the Network Connections console, rename your private network connection (for example, Private).

  2. Select Internet Protocol (TCP/IP), File and Printer Sharing for Microsoft Networks, and Client for Microsoft Networks for the private network connection.

  3. Configure a static IP address and subnet mask for the connection. Ensure settings for Preferred DNS server and Alternate DNS server are left blank.

  4. Configure advanced TCP/IP settings. Verify the following information:

    • On the DNS tab, under DNS server addresses, in order of use, ensure that no addresses are listed.
    • On the DNS tab, ensure that the Register this connection's addresses in DNS check box is cleared.
    • On the WINS tab, ensure that Disable NetBIOS over TCP/IP is selected.
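
As an alternative to the Network Connections UI, the private connection's static address and empty DNS configuration can be set from a command prompt. The connection name assumes you renamed the connection Private as in step 1, and the address and mask are placeholders for your private subnet; disabling NetBIOS over TCP/IP still requires the WINS tab:

    netsh interface ip set address name="Private" static 10.10.10.1 255.255.255.0
    netsh interface ip set dns name="Private" source=static addr=none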

Procedure W03-DWHE.18: To configure the network connection order for a clustered mailbox server

  1. In the Network Connections console, open the Advanced Settings page from the Advanced menu.

  2. On the Adapters and Bindings tab, under Connections, make sure that your connections appear in the following order:

    • Public
    • Private
    • Remote access connections

Create a New Cluster

In this section, you will create a new cluster and perform cluster configurations.

Procedure W03-DWHE.19: To use the new server cluster wizard to create a new cluster

  1. Log on to the first cluster node (EXMBX01-NODE1) as Fabrikam\Administrator.

  2. Open a Command Prompt window, and run the following command:

    cluster /create /wizard
    
  3. The New Server Cluster wizard appears. Follow the on-screen instructions to configure the new cluster. During the process, you will be prompted to enter the cluster name (EXCLUS01), unique cluster IP address, and the cluster service account (ClusterAdmin) credentials.

    Note:
    The wizard may warn you if it does not find shared storage for a quorum. This warning is expected and can be ignored. On the Proposed Cluster Configuration page, click Quorum, and then select Majority Node Set from the drop-down box.

Procedure W03-DWHE.20: To install a second (passive) node in the cluster

  1. Log on to the first cluster node (EXMBX01-NODE1) as Fabrikam\Administrator.

  2. Open a Command Prompt window, and run the following command:

    cluster /cluster:<ClusterName> /add /wizard
    
    Note:
    In this reference architecture, the Cluster name is EXCLUS01.
  3. The Add Nodes wizard will appear. Follow the on-screen instructions to add the second node (EXMBX01-NODE2) to the cluster (EXCLUS01).

You can verify that the cluster service is running and the cluster is operational by running the cluster group command at a command prompt.

Procedure W03-DWHE.21: To validate the cluster configuration

  1. Log on to the first cluster node (EXMBX01-NODE1) as Fabrikam\Administrator.

  2. Open a Command Prompt window, and run the following command:

    cluster group
    
  3. The Status of the cluster group should be displayed as Online.
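
The output resembles the following sketch; the group, node, and status values will match your environment:

    Group                 Node             Status
    --------------------  ---------------  ------
    Cluster Group         EXMBX01-NODE1    Online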

Procedure W03-DWHE.22: To configure the cluster networks for the cluster heartbeat and network priority order

  1. On EXMBX01-NODE1, open the Cluster Administrator console.

  2. Configure the private network interface properties. Verify that Enable this network for cluster use and Internal cluster communications only (private network) are selected.

  3. Configure the public network interface properties. Verify that Enable this network for cluster use and All communications (mixed network) are selected.

  4. Configure network priorities settings. Change the order of the interfaces so that the private interface is listed first.
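
The network roles in steps 2 and 3 can also be set with cluster.exe; Role=1 restricts a network to internal (private) cluster communication, and Role=3 allows all (mixed) communication. The network names below assume you renamed the connections Private and Public as described earlier:

    cluster network "Private" /prop Role=1
    cluster network "Public" /prop Role=3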

Configure the File Share Witness

After the cluster has been formed and configured, the file share witness must be configured. CCR uses the file share witness on a third computer to avoid an occurrence of network partition within the cluster. The file share for the file share witness can be hosted on any server running the Microsoft Windows operating system. However, we recommend that you use a Hub Transport server in the same Active Directory Site as the cluster nodes to host it.

Procedure W03-DWHE.23: To create and secure the file share for the file share witness

  1. Log on to EXHUB01 as Fabrikam\Administrator.

  2. Create a directory that will be used for the share by running the following command at a command prompt:

    mkdir C:\MNS_FSW_DIR_MBX01
    
  3. Create the share by running the following command:

    net share MNS_FSW_MBX01=C:\MNS_FSW_DIR_MBX01 /GRANT:Fabrikam\ClusterAdmin,FULL
    
  4. Assign permissions to the share by running the following command:

    cacls C:\MNS_FSW_DIR_MBX01 /G BUILTIN\Administrators:F Fabrikam\ClusterAdmin:F
    
  5. Verify that the share is viewable from the first cluster node by running the following command from EXMBX01-NODE1:

    NET VIEW \\Exhub01
    

    You should see your share MNS_FSW_MBX01 listed.

Procedure W03-DWHE.24: To configure the MNS quorum to use the file share witness

  1. Log on to the first cluster node (EXMBX01-NODE1) as Fabrikam\Administrator.

  2. To configure use of the MNS quorum, run the following command at a command prompt:

    Cluster res "Majority Node Set" /priv MNSFileShare="\\EXHUB01\MNS_FSW_MBX01"
    
  3. You will receive a message stating that the properties were stored, but that not all changes will take effect until the next time the resource is brought online. This is expected behavior.

  4. Run the following command to restart the resource by moving the cluster group to the second cluster node:

    Cluster group "Cluster Group" /move
    
  5. Repeat the command in Step 4 to complete the configuration and return the cluster group to the first node.

  6. To check the value of the file share property, run the following command:

    Cluster res "Majority Node Set" /priv
    

Ensure That All Cluster Nodes Are Online

To successfully install Exchange 2007 SP1 on a server in a cluster, all cluster nodes must be online.

Procedure W03-DWHE.25: To ensure that all Cluster Nodes are online

  1. In Cluster Administrator, select the cluster name under the root container.

  2. In the details pane, under State, ensure that all cluster nodes are Online.

  3. Ensure that the Exchange cluster resources are currently active on the first node, EXMBX01-NODE1.

Install the Mailbox Server Role on the Active Cluster Node

Once you have completed the previous installation and configurations, you can install the Mailbox Server role on the first (active) cluster node.

Procedure W03-DWHE.26: To install the Mailbox server role on the Active cluster node

  1. Log on to EXMBX01-NODE1 as Fabrikam\Administrator.

  2. Using the Exchange 2007 SP1 installation media, run Exchange 2007 SP1 setup from the command line specifying the Mailbox server role:

    Setup /mode:install /roles:MB
    
  3. After setup is complete, open a Command Prompt window and navigate to the bin directory under the Exchange program files. By default, the location is <systemdrive>:\Program Files\Microsoft\Exchange Server\bin.

  4. Run the following command to create the clustered mailstore:

    ExSetup /newcms /CMSname:EXMBXCLUS01 /CMSIPAddress:<ClusteredMailboxServerIPAddress>
    
    Note:
    EXMBXCLUS01 is a virtual server, and the IP address should be a unique IP in the subnet.

    Both CMSname and CMSIPAddress are required parameters, and their values must be different from the cluster name and cluster IP address:

    • CMSname is the name of the clustered mailbox server.
    • CMSIPAddress is the IP address of the clustered mailbox server, resolvable by DNS.

    For more information, see Exchange Cluster Resources for Clustered Mailbox Servers.

Configure Storage and Volume Mount Points on the Active Cluster Node

In Exchange 2007 SP1 cluster continuous replication (CCR), there is no shared storage between the cluster nodes. Each node has dedicated volumes (also known as LUNs), and log shipping is used to replicate data between the nodes. For CCR clustering, it is a best practice to:

  • Separate the storage into individual LUNs at the hardware level, and do not create multiple logical partitions of a LUN within the operating system.
  • Separate the transaction logs and databases and house them on separate physical disks to increase fault tolerance.
  • Separate the active and passive LUNs on completely different storage arrays so that the storage is not a single point of failure.

With a maximum of 50 storage groups per mailbox server, you could easily run out of available drive letters. You can use the Volume Mount Points feature of Windows Server to surpass the 26-drive-letter limitation. A volume mount point grafts (mounts) a target partition into a folder on another physical disk.

Work with your storage vendors to find the storage solution that will meet your requirements for performance, capacity, and scalability. This reference architecture does not contain prescriptive guidance on storage configuration. However, it shows how to use the Volume Mount Points feature to surpass the 26-drive-letter limitation.

In this procedure you will use Disk Manager to mount high-performance disk arrays under Volume Mount Points. This procedure assumes that you have several unpartitioned disk volumes (or LUNs) attached to each cluster node.

Procedure W03-DWHE.27: To create Mount Points for the databases

  1. Log on to EXMBX01-NODE1 as Fabrikam\Administrator. Open Disk Management by running diskmgmt.msc at a command prompt.

  2. On EXMBX01-NODE1, select the high-performance disk volume on which you want to create a mount point for the database.

  3. Create a primary partition, create an NTFS folder for the mount point (for example, C:\MountPoints\EXMBXCLUS01-SG01Data), and then format the partition using the NTFS File System.

  4. Repeat steps 2 - 3 to create a second mount point for a database (for example, EXMBXCLUS01-SG02Data).

Note:
The first mount point is used for the default First Storage Group. The second one will be used later for a storage group used by Hosted Messaging and Collaboration.

Procedure W03-DWHE.28: To create Mount Points for the Transaction Logs

  1. Log on to EXMBX01-NODE1 as Fabrikam\Administrator. Open Disk Management by running diskmgmt.msc at a command prompt.

  2. Select the high-performance disk volume on which you want to create a mount point for transaction logs.

  3. Create a primary partition, create an NTFS folder for the mount point (for example, C:\MountPoints\EXMBXCLUS01-SG01Logs), and then format the partition using the NTFS File System.

  4. Repeat steps 2 - 3 to create a second mount point for transaction logs (for example, EXMBXCLUS01-SG02Logs).

Note:
The first mount point is used for the default First Storage Group. The second one will be used later for a storage group used by Hosted Messaging and Collaboration.
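
If you prefer the command line over Disk Management, a mount point can also be created with diskpart and then formatted with format.com. This sketch assumes the unpartitioned LUN is disk 2 and that the mount-point folder already exists; adjust the disk number and path for each volume:

    C:\> diskpart
    DISKPART> select disk 2
    DISKPART> create partition primary
    DISKPART> assign mount=C:\MountPoints\EXMBXCLUS01-SG01Data
    DISKPART> exit
    C:\> format C:\MountPoints\EXMBXCLUS01-SG01Data /fs:ntfs /q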

Configure Storage and Volume Mount Points on the Passive Cluster Node

Now you will create volume mount points on the second (passive) cluster node that exactly mirror the ones on the first (active) node.

Procedure W03-DWHE.29: To create a mount point for the first databases

  1. Log on to EXMBX01-NODE2 as Fabrikam\Administrator. Open Disk Management by running diskmgmt.msc at a command prompt.

  2. Select the high-performance disk volume on which you want to create a mount point for the database.

  3. Create a primary partition, create an NTFS folder for the mount point, and then format the partition using the NTFS File System.

    Note:
    The directory and name of the new NTFS folder for the mount point must be identical to the database mount point on the active node (for example, C:\MountPoints\EXMBXCLUS01-SG01Data).
  4. Repeat steps 2-3 to create a second mount point for a database (for example, EXMBXCLUS01-SG02Data).

Procedure W03-DWHE.30: To create a mount point for the first Storage Group's Transaction Logs

  1. Log on to EXMBX01-NODE2 as Fabrikam\Administrator. Open Disk Management by running diskmgmt.msc at a command prompt.

  2. Select the high-performance disk volume on which you want to create a mount point for transaction logs.

  3. Create a primary partition, create an NTFS folder for the mount point, and then format the partition using the NTFS File System.

    Note:
    The directory and name of the new NTFS folder for the mount point must be identical to the transaction logs mount point on the active node (for example, C:\MountPoints\EXMBXCLUS01-SG01Logs).
  4. Repeat steps 2-3 to create a second mount point for transaction logs (for example, EXMBXCLUS01-SG02Logs).

Create New Storage Groups and Databases on the CCR Cluster

Procedure W03-DWHE.31: To dismount and remove the default Storage Group and Mailbox Database

  1. Open the Exchange Management Console on EXMBX01-NODE1, expand Server Configuration, and then click Mailbox.

  2. In the center pane, click EXMBXCLUS01. Expand First Storage Group in the result pane.

  3. Dismount the mailbox database.

  4. After the database has dismounted successfully, remove the mailbox database.

  5. After the mailbox database has been removed, remove the first storage group.

Procedure W03-DWHE.32: To create a new Storage Group and Database on the CCR Cluster

  1. Open the Exchange Management Console on EXMBX01-NODE1, expand Server Configuration, and then click Mailbox.

  2. In the center pane, right-click EXMBXCLUS01 and select New Storage Group.

  3. Specify a storage group name, for example EXMBXCLUS01-SG01. Specify the Log Files path and System Files path. Place the log files and system files in the appropriate Volume Mount Point or drive/directory, for example, C:\MountPoints\EXMBXCLUS01-SG01Logs.

  4. Right-click the new storage group (EXMBXCLUS01-SG01) and select New Mailbox Database.

  5. Specify a mailbox database name (for example, EXMBXCLUS01-SG01-HostedMailstore01). Specify the Database file path. Place the database files in the appropriate Volume Mount Point or drive/directory, for example, C:\MountPoints\EXMBXCLUS01-SG01Data. Verify that Mount this database is selected.

Note:
Place the Storage Group files and Mailbox Database files in the appropriate Volume Mount Point locations.
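
The same storage group and database can be created from the Exchange Management Shell instead of the console. The names and paths below match the examples in this procedure; treat the exact database file name as illustrative:

    New-StorageGroup -Server EXMBXCLUS01 -Name EXMBXCLUS01-SG01 -LogFolderPath C:\MountPoints\EXMBXCLUS01-SG01Logs -SystemFolderPath C:\MountPoints\EXMBXCLUS01-SG01Logs
    New-MailboxDatabase -StorageGroup EXMBXCLUS01\EXMBXCLUS01-SG01 -Name EXMBXCLUS01-SG01-HostedMailstore01 -EdbFilePath C:\MountPoints\EXMBXCLUS01-SG01Data\EXMBXCLUS01-SG01-HostedMailstore01.edb
    Mount-Database -Identity EXMBXCLUS01\EXMBXCLUS01-SG01\EXMBXCLUS01-SG01-HostedMailstore01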

Procedure W03-DWHE.33: To create a second Storage Group and Database on the CCR Cluster

  1. Open the Exchange Management Console on EXMBX01-NODE1, expand Server Configuration, and then click Mailbox.

  2. In the center pane, right-click EXMBXCLUS01 and select New Storage Group.

  3. Specify a storage group name, for example EXMBXCLUS01-SG02. Specify the Log Files path and System Files path. Place the log files and system files in the appropriate Volume Mount Point or drive/directory, for example, C:\MountPoints\EXMBXCLUS01-SG02Logs.

  4. Right-click the new storage group (for example, EXMBXCLUS01-SG02) and select New Mailbox Database.

  5. Specify a mailbox database name (for example, EXMBXCLUS01-SG02-HostedMailstore02). Specify the Database file path. Place the database files in the appropriate Volume Mount Point or drive/directory, for example, C:\MountPoints\EXMBXCLUS01-SG02Data. Verify that Mount this database is selected.

Note:
Although this Deployment Walkthrough has steps to create only two storage groups and databases, you can repeat the steps to create as many Storage Groups and Databases as you will initially need.

Install the Mailbox Server Role on the Passive Cluster Node

Now, you can install the Mailbox Server role on the passive node.

Procedure W03-DWHE.34: To install the Mailbox Server role on the passive cluster node

  1. Log on to EXMBX01-NODE2 as Fabrikam\Administrator.

  2. Using the Exchange 2007 SP1 installation media, run Exchange 2007 SP1 setup from the command line specifying the Mailbox Server role:

    Setup /mode:install /roles:MB
    

All storage groups defined for the clustered mailbox server must be seeded on the new passive node. Seeding is the process of making available a baseline copy of a database on the current passive node. Automatic seeding should take place as a result of installing the mailbox server role on the passive node. In this procedure you will verify that automatic seeding has taken place.

Procedure W03-DWHE.35: To verify that the automatic seeding has occurred on the passive node

  1. Log on to the passive node (EXMBX01-NODE2) as Fabrikam\Administrator.

  2. Open the Exchange Management Shell, and then navigate to the Microsoft Exchange installation files. By default, the installation file location is <systemdrive>:\Program Files\Microsoft\Exchange Server\bin.

  3. Run the following command:

    Get-StorageGroupCopyStatus
    
  4. If the Storage Groups have a status of Healthy, then automatic seeding has occurred successfully. If for some reason Automatic Seeding has not taken place, refer to the Exchange 2007 SP1 Help file, and search for "How to Seed a Cluster Continuous Replication Copy".

Verify the Ability to Move a Clustered Mailbox Server between the Nodes in the Cluster

After you complete the installation of a CCR solution, or after you make significant configuration changes, we recommend that you verify that both nodes are correctly configured to support the clustered mailbox server by moving the Exchange cluster resources between the nodes of the cluster.

Procedure W03-DWHE.36: To verify the ability to move a clustered mailbox server between the nodes in the cluster

  1. Open the Exchange Management Console on the passive node, expand Server Configuration, and then click Mailbox.

  2. In the center pane, right-click EXMBXCLUS01 and select Manage Clustered Mailbox Server.

  3. Follow the on-screen instructions to move the clustered mailbox server to EXMBX01-NODE2.

  4. Verify that the move has completed successfully by opening the Cluster Administrator MMC and noting the state of the Exchange cluster resources.

  5. Repeat steps 2-4 to move the Exchange cluster resources back to EXMBX01-NODE1.
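
The move can also be performed from the Exchange Management Shell by using the Move-ClusteredMailboxServer cmdlet, available in Exchange 2007 SP1. The comment text is arbitrary and is recorded for auditing:

    Move-ClusteredMailboxServer -Identity EXMBXCLUS01 -TargetMachine EXMBX01-NODE2 -MoveComment "Verifying handover between cluster nodes"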