Sunday, July 17, 2011

Clustering MS SQL Server 2008 R2 on VMware Virtual Windows Server 2008 R2

If Windows Server 2008 R2 is used, the operating system for clustering MS SQL has to be at least Enterprise Edition. It can be Datacenter Edition as well, but I prefer Enterprise Edition this time. Why couldn't we use Standard Edition or Web Edition, though? Because MS SQL clustering relies on the core Windows failover clustering feature, and this feature is not part of the Standard or Web Editions.
The MS SQL Server 2008 R2 product can be Standard or Enterprise Edition. The most significant differentiator between these two editions is the number of cluster nodes. If you plan only two nodes for failover clustering, Standard Edition will be sufficient. But if more than two cluster nodes are planned (up to 16 nodes), Enterprise Edition MS SQL is a necessity for your environment. I am going to use Enterprise Edition MS SQL Server 2008 R2 in my scenario. At first, I will create a two-node cluster, but I want to extend it in the future.
In my clustering scenario, I will cluster two virtual machines located on two VMware ESX host machines, so this environment has some specific requirements. Table 1 and Figure 1 present them:
Table 1: Clustering requirements for virtual machines on different ESX hosts
Operating System: Two virtual machines on two different physical machines (ESX hosts) run the clustering software.
Networking: The virtual machines share a private network connection for the private heartbeat and a public network connection.
Storage: Each virtual machine is connected to shared storage, which must be on a SAN.


Figure 1: Virtual machines with their requirements

With all this information, I can sketch my environment as in Figure 2. The figure shows the network environment and the products that are going to be used.


Figure 2: Network Topology

All right. Now that I have laid out the planned environment, let me talk about the steps that I have to complete throughout the scenario. Here is the list:
1-      Create a virtual machine on one of the ESX hosts
2-      Install Windows OS on virtual machine
3-      Clone the virtual machine to another ESX host
4-      Bind necessary disk volumes of shared storage (SAN) to virtual machines
5-      Enable and configure Failover Clustering on virtual machines
6-      Install MS SQL Server 2008 R2 on the first virtual machine with failover cluster support
7-      Add the second virtual machine to the already created SQL cluster
First things first, let's create the virtual machines. I open the vSphere Client and, on the left part of the interface, I right-click one of the ESX host machines of my vSphere environment and select “New Virtual Machine…” from the menu. The “Create New Virtual Machine” wizard appears, and throughout this wizard I select the values which are presented in Table 2.
Table 2: The values for “Create New Virtual Machine” wizard
Option: Action
Configuration: Select Typical.
Name and Location: Enter a name and select a location.
Datastore: Select a datastore as the location for the virtual machine configuration file and the virtual machine disk (.vmdk) file.
Guest Operating System: Select the Windows Server operating system that you intend to install.
Create Disk: Enter the appropriate value in Virtual disk size. Select “Support clustering features such as Fault Tolerance” to create a disk in eagerzeroedthick format.
Ready to Complete: Select “Edit the virtual machine settings before completion” and click Continue. The Virtual Machine Properties dialog box appears.
When the properties of the new machine are opened, I select the New NIC that is being added. Under Network Connection, I select a label from the Network Label list. This interface can be either for the Public or the Private network (by the way, these networks have to be configured before this action; otherwise, I couldn't add the NIC to the virtual machine). I select the network label for the Public network now. Later, after the creation of the virtual machine, I will add a second network interface and select the network label for the Private network, but not yet.
When I click the Finish button, the wizard creates the virtual machine. I DO NOT add the shared cluster disks at this time, because I will clone this machine to another host shortly, and disks that are mounted as Raw Device Mappings (RDM) on virtual machines are converted to virtual disks by the ESX server during a clone operation. In the virtual machine properties dialog box, I press the Add button, select Ethernet Adapter from the list and click Next. Now is the time to select the Private Network label for this adapter; as you remember, our first interface is for the Public Network. By clicking Next and Finish respectively, I complete the creation of the virtual machine. After that, I boot the machine and install Windows Server 2008 R2 Enterprise Edition on it.
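For those who prefer scripting, the same creation can be sketched with VMware PowerCLI. This is a minimal, hedged example, not the exact steps above: the vCenter name, host, datastore, disk size and network labels are placeholders for my environment.

# Minimal PowerCLI sketch of the VM creation (all names are placeholders)
Connect-VIServer -Server vcenter.mstip.com

# Create the VM on the first ESX host with an eagerzeroedthick system disk
$vm = New-VM -Name "mstipcls01" -VMHost (Get-VMHost "esx01.mstip.com") `
      -Datastore "datastore1" -DiskGB 40 -DiskStorageFormat EagerZeroedThick `
      -GuestId windows7Server64Guest -NetworkName "Public"

# Add the second (heartbeat) adapter on the Private network
New-NetworkAdapter -VM $vm -NetworkName "Private" -StartConnected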
Now I can clone the newly created Windows Server 2008 R2 machine to another ESX host. I shut down the virtual machine first, then right-click it and select Clone from the menu. I select/enter values according to the actions in Table 3.
Table 3: Actions taken throughout the cloning wizard
Page: Action
Name and Location: Enter a name and select a location.
Host or Cluster: Select the second host for the cluster setup.
Resource Partition: Select the resource pool for the virtual machine, or select the host if no resource pools exist.
Datastore: Select a datastore as the location for the virtual machine configuration file and the .vmdk file.
Disk Format: Select “Same format as source”.
Customization: Select “Customize using the customization wizard”. In the wizard that opens, enter all the required information and, on the screen before the last one, select the “create a new SID” checkbox.
Ready to Complete: Click OK to create the virtual machine.
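The clone can also be scripted with PowerCLI. A hedged sketch, assuming an OS customization specification that generates a new SID was prepared in vCenter beforehand (the spec and all names are placeholders):

# Clone the first node to the second ESX host, customizing the guest
$spec = Get-OSCustomizationSpec "Win2008R2-NewSID"   # assumed, prepared beforehand
New-VM -Name "mstipcls02" -VM (Get-VM "mstipcls01") `
       -VMHost (Get-VMHost "esx02.mstip.com") -Datastore "datastore2" `
       -OSCustomizationSpec $spec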
And our second virtual machine is ready, with its two network adapters and Enterprise Edition operating system. Now I can add shared disks to both virtual machines. Before that, I configure the Public and Private network IP addresses on both machines and test whether any connection problem exists.
Note: For the Private Network interface, no default gateway is set (Figure 3), and the interface has to be ordered after the Public Network interface via the Advanced Ethernet Settings (Figure 4).


Figure 3: No default gateway is set for private network interface (heartbeat interface)



Figure 4: Public interface is listed before private interface

After I verify that everything is OK, I start to add the shared disks to the first virtual machine. For this purpose, I select Edit Settings for one of my newly created virtual machines via the vSphere Client inventory.
On the opening screen, I select Add > Hard Disk and click Next. I select Raw Device Mappings on the next screen and click Next. A list of LUNs appears and I select an unformatted LUN here. (For LUNs to be listed here, the creation of the LUNs has to be finished on the SAN storage device beforehand, and these LUNs have to be presented so that the ESX servers can see them. Because these topics are not the concern of this article, I will not cover them.) After clicking Next, I select a datastore, which has to be on a SAN. On the next screen, I select Physical as the compatibility mode and click Next.
The next screen is the controller selection screen. I select a new virtual device node (i.e. SCSI 1:0 or SCSI 2:0) and click Next; I have to select a new SCSI controller here. When I then click the Finish button, a new SCSI controller and a new hard disk are created by the wizard. There is one more thing to do about the hard disk that I created, and that is the type of the SCSI controller. In the virtual machine properties dialog box, I select the new SCSI controller and click Change Type. On the opening dialog box, I select LSI Logic SAS (if it is not already selected) for the controller (this type is for Windows Server 2008 R2; controller types for previous operating systems vary). After that, I set the SCSI Bus Sharing to Physical and the job is done. Figure 5 shows the newly created virtual machine with two ethernet adapters and the FC SAN quorum disk.
Note: I want to add something here. Be careful that your first shared disk is the one you plan to use as the quorum disk. When you create your first Windows cluster, the failover cluster wizard will not ask which disk you are going to use as the quorum disk; it automatically uses the first shared disk as the quorum disk (although you can change it later via the failover cluster administrative interface).


Figure 5: Newly created virtual machine (first node of the cluster)
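The RDM and controller steps above can also be scripted with PowerCLI. A hedged sketch, assuming the LUN's console device path has been looked up beforehand (the path below is a placeholder):

# Console device name of the quorum LUN; list candidates with:
#   Get-ScsiLun -VmHost (Get-VMHost "esx01.mstip.com") -LunType disk
$lunPath = "/vmfs/devices/disks/vml.XXXX"   # placeholder

# Add the quorum disk to node 1 as a physical-mode RDM
$disk = New-HardDisk -VM (Get-VM "mstipcls01") -DiskType RawPhysical -DeviceName $lunPath

# Put it on its own LSI Logic SAS controller with physical bus sharing
New-ScsiController -HardDisk $disk -Type VirtualLsiLogicSAS -BusSharingMode Physical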

Using the same procedure, I can add more shared disks to the virtual machine as well. I therefore add two additional shared disks to the same machine, so in total I have three shared disks on one virtual machine (these three disks will serve as the quorum, MS DTC and SQL Server disks). Having finished the addition of shared disks to my first virtual machine, I now have to add them to the second machine.
The procedures are similar, but there are two points to be careful about when adding the disks to the second virtual machine. First: in the wizard, I have to select “Use an existing virtual disk” instead of RDM. Second: the virtual device node selection. I have to select exactly the same virtual device node as I did for the first virtual machine; if I chose SCSI 1:0 for the first machine, I have to choose SCSI 1:0 for the second machine as well. All of the remaining procedures are the same. When I finish the addition of the shared disks to the second machine, all the work on the vSphere side is done. Figure 6 shows the final state of the two newly created virtual machines.
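For the second node, the scripted equivalent attaches the existing disk file instead of creating a new RDM. A hedged sketch (the disk path is a placeholder; it must point to the RDM pointer file created for node 1, and the device node must end up identical on both nodes):

# Attach the same disk file to node 2 and give it a matching SAS controller
$disk = New-HardDisk -VM (Get-VM "mstipcls02") `
        -DiskPath "[san_datastore] mstipcls01/mstipcls01_1.vmdk"
New-ScsiController -HardDisk $disk -Type VirtualLsiLogicSAS -BusSharingMode Physical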


Figure 6: Newly created virtual machines for clustering

There are software, hardware, networking and account requirements for two-node failover clustering, so it is easier to present them as a table. Table 1 shows the requirements of a two-node failover cluster.
Table 1: Requirements for a two-node failover cluster
Component: Requirement
Software: Servers must run the same version of the Windows operating system, with the same software updates and service pack. The Failover Clustering feature has to be installed on the servers.
Hardware / Servers: It is recommended to use the same or similar server hardware, and it has to be Windows Server 2008 compatible.
Hardware / Network adapters: Must be Windows Server 2008 compatible. For an iSCSI scenario, an adapter must be dedicated either to network communication or to iSCSI, not both.
Hardware / Storage: Must be Windows Server 2008 compatible. At least two shared disks have to be configured: one for the witness disk and the other for the clustered application/service.
Hardware / Device Controllers: If Serial Attached SCSI or Fibre Channel is used, the mass-storage device controllers dedicated to the cluster storage should be identical in all clustered servers and should use the same firmware version. Parallel SCSI is not supported. If iSCSI is used, each clustered server must have one or more network adapters or host bus adapters dedicated to the cluster storage; the network used for iSCSI cannot be used for network communication. A gigabit interface is recommended.
Networking Infrastructure / Network settings: Use identical communication settings on the network adapters (for example, Speed, Duplex Mode, Flow Control, and Media Type). In addition, compare the settings between each network adapter and the switch it connects to and make sure that no settings are in conflict.
Networking Infrastructure / DNS: The servers in the cluster must use the Domain Name System (DNS) for name resolution.
Networking Infrastructure / Domain Role: All servers in the cluster must be in the same Active Directory domain.
Networking Infrastructure / Domain Controller: It is recommended that the clustered servers be member servers.
Networking Infrastructure / Clients: For testing purposes, one or more networked clients can be connected to the failover cluster, and the effect on a client can be observed when a clustered service is moved or failed over from one cluster node to the other.
Account: The cluster must be created while logged on with an account that has administrator rights and permissions on all servers in that cluster. The account must also have the Create Computer Objects permission in the domain.
As a matter of fact, my environment meets all the requirements listed in the table, but I strongly recommend that you read the requirements and make sure your environment meets them too. Now that I have covered the requirements, let's start the creation of the two-node failover cluster.
First of all, I have to configure the network settings of the public and private network interfaces (note: I renamed these interfaces to LAN and Heartbeat, respectively). To do this, I open the network connections page via Control Panel (Start -> Control Panel -> Network and Sharing Center -> Change Adapter Settings) and open the properties of the public network interface (LAN) on the first server. I open the TCP/IPv4 properties and fill in the necessary fields as shown in Figure 1.

Figure 1: IPv4 settings of public network interface
After that, I open the properties of the private network interface (Heartbeat) and fill in the necessary fields as shown in Figure 2. I give it an IP address from the 100.100.100.0/24 block, as you can see in the figure.

Figure 2: IPv4 settings of private network interface
After that I repeat the same steps for the second virtual machine. Table 2 represents the values that I give to each server.
Table 2: IP values for each server
Server 1 (mstipcls01), LAN: IP Address 192.168.2.101, Subnet Mask 255.255.255.0, Default Gateway 192.168.2.1, Primary DNS Server 192.168.2.22
Server 1 (mstipcls01), Heartbeat: IP Address 100.100.100.101, Subnet Mask 255.255.255.0, Default Gateway <blank>, Primary DNS Server <blank>
Server 2 (mstipcls02), LAN: IP Address 192.168.2.102, Subnet Mask 255.255.255.0, Default Gateway 192.168.2.1, Primary DNS Server 192.168.2.22
Server 2 (mstipcls02), Heartbeat: IP Address 100.100.100.102, Subnet Mask 255.255.255.0, Default Gateway <blank>, Primary DNS Server <blank>
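For reference, the same settings can be applied from an elevated prompt. A hedged sketch for the first node (the interface names assume the adapters were renamed to LAN and Heartbeat as described above):

# Apply the Table 2 values on mstipcls01
netsh interface ipv4 set address name="LAN" static 192.168.2.101 255.255.255.0 192.168.2.1
netsh interface ipv4 set dnsservers name="LAN" static 192.168.2.22 primary
netsh interface ipv4 set address name="Heartbeat" static 100.100.100.101 255.255.255.0

# Quick reachability check toward the second node, over each network
ping 192.168.2.102
ping 100.100.100.102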
As you remember from the first part of the article, the LAN interface has to be ordered before the Heartbeat interface in the advanced settings (Figure 4 of the first part). After I verify this via the Advanced Network Settings, I check the communication between the two virtual machines with the ping command. The whole verification process finishes smoothly in my case (I can successfully ping both machines from each other via each interface), and the two machines are ready to be clustered from a networking point of view. At this point, it is a good idea to make the virtual machines members of the mstip.com domain.
After joining the machines to the domain, the only thing left before the creation and configuration of the cluster is enabling the Failover Clustering feature on each machine from the Server Manager snap-in. For this purpose, I open the Server Manager snap-in (Start -> Administrative Tools -> Server Manager) and select “Add Features” from the Features field. In the opening wizard, I select “Failover Clustering” from the list (Figure 3) and complete it. After I finish the same procedure on the second server, I am ready to cluster the two machines.
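The same feature can be enabled from PowerShell as well; the ServerManager module ships with Windows Server 2008 R2. A minimal sketch, run on each node:

# Enable the Failover Clustering feature without the GUI
Import-Module ServerManager
Add-WindowsFeature Failover-Clustering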

Figure 3: Failover Clustering selection
Before cluster creation, I have to validate my virtual servers for clustering. For this purpose, I open the “Failover Cluster Manager” snap-in (Start -> Administrative Tools -> Failover Cluster Manager) on one of my virtual machines. On the main screen, which is shown in Figure 4, there is a “Validate a configuration…” link. I click that link and a new wizard appears. The first screen is an informative one, and by clicking the Next button, the server selection screen appears (Figure 5).

Figure 4: Main screen of “Failover Cluster Manager” snap-in

Figure 5: Server selection screen
Here I select my two virtual machines, add them to the list with the help of the Browse button and click Next. On the “Testing Options” screen, I select the “Run all tests (recommended)” option and press the Next button again (Figure 6). The wizard starts to validate various components (Figure 7), and when it finishes the validation process, it creates a detailed report about what is wrong and what is OK. In my validation report everything seems fine, so I can proceed to the creation of the cluster.
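The validation can also be run from PowerShell; the FailoverClusters module is installed together with the feature. A minimal sketch:

# Scripted equivalent of the “Validate a Configuration” wizard;
# like the GUI, it produces a detailed HTML report
Import-Module FailoverClusters
Test-Cluster -Node mstipcls01, mstipcls02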

Figure 6: “Testing options” screen

Figure 7: Validation process working
To create a new cluster, I click the “Create a cluster” link on the main screen of the “Failover Cluster Manager” snap-in. It leads to a new wizard, of course :). On the first screen, I select my two virtual servers again and press the Next button. On the second screen, I have to give a name and an IP address to my new cluster; I fill in the necessary fields as seen in Figure 8. This means that my newly created cluster will serve under this new name and IP address: the users or services that utilize the cluster won't know about the real names and IP addresses of my two virtual machines. Figure 9 represents the nodes, the cluster and their DNS records.

Figure 8: Cluster name and IP address

Figure 9: Representation of clustered nodes
After a few more Next buttons, I finish the wizard. This gives you a new working Windows Server 2008 R2 cluster. Now you can check via the “Failover Cluster Manager” snap-in whether the whole configuration is working without errors. Actually, the interface is helpful enough that all the working components of the cluster are shown with a green OK sign.
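For the script-minded, the creation step has a PowerShell equivalent too. A hedged sketch; the cluster name and IP address below are placeholders, since yours are whatever you typed into the wizard (Figure 8):

# Create the two-node cluster and verify that both nodes are up
Import-Module FailoverClusters
New-Cluster -Name mstipcluster -Node mstipcls01, mstipcls02 -StaticAddress 192.168.2.100
Get-ClusterNode    # both nodes should report State = Up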

With the release of the MS SQL Server 2008 product (I will refer to it as SQL throughout the article), Microsoft introduced two installation options for clustered environments: Integrated installation and Advanced/Enterprise installation. Integrated installation satisfies the most common requirements for SQL Server deployments. It is used for creating a SQL failover instance, adding a node to an existing cluster, and removing a node from an existing cluster, regardless of the original installation option. The Advanced/Enterprise installation option, on the other hand, enables users to take advantage of a two-phase SQL Server failover cluster installation process. This is beneficial in enterprise deployment solutions that use technologies such as Systems Management Server (SMS) or scripted deployments. Table 1 presents the installation phases of the two methods.
Table 1: Installation phases
Method: Phases
Integrated installation
  1. Create and configure a single-node SQL Server failover cluster instance. When configuration of the node is successful, you have a fully functional failover cluster instance. At this point, it does not have high availability because there is only one node in the failover cluster.
  2. On each node to be added to the SQL Server failover cluster, run Setup with Add Node functionality to add that node.
Advanced/Enterprise installation
  1. Prepare. Running the Prepare Failover Cluster setup on one node creates the Configuration.ini file that lists all of the settings that were specified. On the additional nodes to be prepared, instead of following these steps, you can supply the auto-generated Configuration.ini file from the first node as an input to the Setup command line. This step prepares the nodes to be clustered, but there is no operational instance of SQL Server at the end of this step.
  2. Complete. After the nodes are prepared for clustering, run Setup on the node that currently owns the shared disk resource. This step configures the failover cluster instance and finishes the installation. At the end of this step, you will have an operational SQL Server failover cluster instance and all of the nodes that were prepared previously for that instance will be the possible owners of the newly created SQL Server failover cluster.
Note: The Prepare step does not require an underlying Windows Server failover cluster to be established. The Complete step, however, does require that the underlying Windows Server failover cluster exists; if it does not, Setup reports an error and exits.
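To make the two-phase flow concrete, here is a hedged command-line sketch of the Advanced/Enterprise route (instance name, account and password are placeholders; run from the SQL media root in an elevated prompt):

rem Phase 1 - Prepare, run on every node that will join the instance
Setup.exe /q /ACTION=PrepareFailoverCluster /FEATURES=SQLENGINE /INSTANCENAME="MSSQLSERVER" /SQLSVCACCOUNT="MSTIP\sqlsvc" /SQLSVCPASSWORD="***"

rem Phase 2 - Complete, run only on the node that owns the shared disk
rem (further required parameters such as /FAILOVERCLUSTERNETWORKNAME,
rem /FAILOVERCLUSTERIPADDRESSES, /INSTALLSQLDATADIR and /SQLSYSADMINACCOUNTS
rem are omitted here for brevity)
Setup.exe /q /ACTION=CompleteFailoverCluster /INSTANCENAME="MSSQLSERVER"

rem On additional nodes, the Configuration.ini generated by the first Prepare
rem can be supplied instead of repeating the parameters:
Setup.exe /ConfigurationFile="Configuration.ini"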
In my scenario, I will use the first method, the Integrated installation: I will first install SQL on one of my servers as usual, which will create a SQL cluster instance, and then I will add the second server as a member of that instance. So, let's not waste time and start with the installation on the first server.
Because I installed Windows Server 2008 R2 as the operating system, I don't need to install any hotfixes or service packs before the SQL deployment. The only thing that I will install before the SQL installation is MS DTC (Microsoft Distributed Transaction Coordinator). Strictly speaking this is not a necessity, but for active/active SQL failover clustering, MS DTC is a must. You can find details on the http://technet.microsoft.com/en-us/library/cc730992(WS.10).aspx webpage.
Note: If you try to install SQL without MS DTC, you will receive a warning message like the one in Figure 1. But that doesn't mean you cannot install SQL without a clustered MS DTC service.



Figure 1: Warning message about MS DTC

As you remember from the first part of the article, I created three shared disks for clustering, and one of them is for MS DTC (the other two are for the quorum and SQL). So, to install MS DTC in a clustered environment, I log on to one of my servers and open the “Failover Cluster Manager” snap-in. I right-click the server name and select “Configure a Service or Application…”. This opens a wizard whose first screen is an informative one, as usual. After I press the Next button, I select “Distributed Transaction Coordinator (DTC)” from the list (Figure 2) and press the Next button. On this screen, I give the name “mstipclusterdtc” and the IP address 192.168.2.103/24 to my new clustered service (Figure 3) and press Next. On the following screen, I select one of my shared disks for MS DTC (Figure 4) and press Next. After the Confirmation and Summary screens, the wizard is finished and I have the MS DTC service in a clustered environment.
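A quick way to confirm the result from PowerShell (the group name assumes the wizard named the group after the client access point I entered above):

# The new DTC group should be listed and Online
Import-Module FailoverClusters
Get-ClusterGroup "mstipclusterdtc"
Get-ClusterGroup "mstipclusterdtc" | Get-ClusterResource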


Figure 2: “Distributed Transaction Coordinator” selection



Figure 3: Name and IP for MS DTC service



Figure 4: Storage for MS DTC Service

So, it is SQL Server installation time :). For this purpose, I mount the ISO image of the SQL installation DVD to the CD drive of my first virtual server and run the setup.exe file. A screen opens and I select Installation on the left side. The Installation selection offers me five different links, and one of them is “New SQL Server failover cluster installation” (Figure 5). I select it and an installation wizard appears.

TIP: If Setup's cluster validation rule check ever blocks you, it can be skipped, e.g.: Setup /SkipRules=Cluster_VerifyForErrors /Action=CompleteFailoverCluster


Figure 5: Installation type selection

The first three or four screens list the problems/requirements found in the environment where the SQL installation will be performed. In my case, there is no problem and all the items are marked green (Figure 6).


Figure 6: No problem at all :)

The next two screens are the Product Key and License Terms screens. After these, I select all the checkboxes on the Feature Selection screen, as seen in Figure 7 (of course, you should choose whatever you need). On the Instance Configuration screen, I give a name to my SQL cluster (Figure 8) and press the Next button. After checking the Disk Space Requirements screen, the Cluster Resource Group screen appears. On this screen, some of the previously created resource groups are listed, but they are all used by other clustered services. Therefore, to create a new resource group, I simply press the Next button here (Figure 9).


Figure 7: “Feature selection” screen



Figure 8: “Instance Configuration” screen



Figure 9: New Cluster Resource Group creation for SQL Server

“Cluster Disk Selection” is the screen where we assign a shared disk to our SQL cluster. After that, the “Cluster Network Configuration” screen appears; I give the IP address 192.168.2.104 to my SQL cluster here (Figure 10). I keep the default and recommended selection (use service SIDs) on the “Cluster Security Policy” screen. On the “Server Configuration” screen, service accounts and collation types are selected/entered. After that, the “Database Engine Configuration” and “Analysis Services Configuration” screens appear. In these parts of the setup, we define the administrative user accounts for the database engine and analysis services; the database locations on the shared disk are also defined here (Figure 11). Let me remind you that these screens can look different in your environment if you selected fewer features than me (I selected all the features at the beginning of the setup wizard, if you remember). The rest of the screens are informative or checking screens, and by clicking the Next button on each of them I finish the installation of SQL on my first machine successfully (Figure 12).
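For documentation purposes, the wizard walk-through above has an unattended command-line equivalent. A heavily hedged sketch: apart from the 192.168.2.104 address mentioned above, every value below (network name, accounts, paths, the “Cluster Network 1” label) is a placeholder for what you would type into the wizard.

rem New SQL failover cluster instance, run on the first node from the media root
Setup.exe /q /ACTION=InstallFailoverCluster /FEATURES=SQLENGINE ^
  /INSTANCENAME="MSSQLSERVER" /FAILOVERCLUSTERNETWORKNAME="mstipsqlcluster" ^
  /FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.2.104;Cluster Network 1;255.255.255.0" ^
  /INSTALLSQLDATADIR="S:\SQLData" /SQLSYSADMINACCOUNTS="MSTIP\Administrator" ^
  /SQLSVCACCOUNT="MSTIP\sqlsvc" /SQLSVCPASSWORD="***" ^
  /AGTSVCACCOUNT="MSTIP\sqlagt" /AGTSVCPASSWORD="***" /IACCEPTSQLSERVERLICENSETERMS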


Figure 10: “Cluster Network Configuration” screen



Figure 11: Database location of SQL server



Figure 12: Wizard completion screen

Now it is time to add the second node to my SQL cluster. To do this, I log on to the second virtual machine and run the SQL setup there. I select “Add node to a SQL Server failover cluster” on the opening screen (Figure 13). The wizard is almost the same, only a shorter version: the first four or five screens are identical, but after the “Setup Support Rules” screen, the “Cluster Node Configuration” screen appears (Figure 14). There, I select my one and only SQL cluster instance and finish the wizard by pressing the Next button on the following screens.
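The add-node step also has an unattended form. A hedged sketch, run on the node being added (the passwords are placeholders):

rem Join this machine to the existing SQL failover cluster instance
Setup.exe /q /ACTION=AddNode /INSTANCENAME="MSSQLSERVER" /SQLSVCPASSWORD="***" /AGTSVCPASSWORD="***" /IACCEPTSQLSERVERLICENSETERMS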


Figure 13: “Add node to a SQL Server failover cluster” selection



Figure 14: “Cluster Node Configuration” screen

Having finished the SQL installation in a clustered environment and added the second node to my cluster instance, I have to check whether it works or not. I open the “Failover Cluster Manager” snap-in and, under “Services and Applications”, I click SQL Server. The screen summarizes the condition of the SQL service (Figure 15). To check whether the second cluster node takes over the job when the first node fails, you can restart the first machine and watch the “Failover Cluster Manager” snap-in on the second virtual machine. You should see the owner of the SQL services change to the second node.
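A gentler failover test than rebooting node 1 is to move the SQL group by hand from PowerShell (the group name below is the default for a default instance; check yours with Get-ClusterGroup):

# Move the SQL Server group to the second node and verify the new owner
Import-Module FailoverClusters
Move-ClusterGroup "SQL Server (MSSQLSERVER)" -Node mstipcls02
Get-ClusterGroup "SQL Server (MSSQLSERVER)"   # OwnerNode should now be mstipcls02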


Figure 15: SQL Server summary screen



