

Creating a completely virtualized cluster using Hyper-V and Windows Storage Server 2008

I've been doing a lot of scripting work lately, automating product installs, OS configuration, and deployment of our core application with a lot of PowerShell scripts, with the ultimate goal of being able to automate a "bare metal" installation of our client's entire environment using the Microsoft Deployment Toolkit (MDT).  One challenging part of this process has been testing the scripting of SQL Server installations for a multitude of scenarios, such as installing only a subset of features, upgrading from SQL Server 2005 to SQL Server 2008, and, most importantly, the differences between installing SQL on a stand-alone server and on a failover cluster.
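
In case it's useful context, these scenarios mostly come down to driving SQL Server 2008's setup.exe with different unattended parameters.  A rough sketch only (the instance, accounts, cluster network name, and paths below are placeholders, and a real run needs more required parameters than shown):

    # Stand-alone install of just the database engine (illustrative subset of parameters)
    .\setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE /INSTANCENAME=MSSQLSERVER `
        /SQLSVCACCOUNT="CONTOSO\sqlsvc" /SQLSVCPASSWORD="********" /SQLSYSADMINACCOUNTS="CONTOSO\DBAdmins"

    # Failover cluster install - same engine, plus the cluster-specific parameters
    .\setup.exe /Q /ACTION=InstallFailoverCluster /FEATURES=SQLENGINE /INSTANCENAME=MSSQLSERVER `
        /FAILOVERCLUSTERNETWORKNAME="SQLCLUS01" `
        /FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.1.50;Cluster Network 1;255.255.255.0" `
        /INSTALLSQLDATADIR="S:\SQLData" `
        /SQLSVCACCOUNT="CONTOSO\sqlsvc" /SQLSVCPASSWORD="********" /SQLSYSADMINACCOUNTS="CONTOSO\DBAdmins"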

 

Testing these scripts against stand-alone machines isn't any big deal, as I have a server I use as a Hyper-V host to test a lot of "what if" scenarios, and I can create, snapshot, and revert virtual machines at will.  But the cluster presents a challenge.  As happens in most environments, servers and SAN resources to create new clusters don't grow on trees, and while we do have development resources, we're actively developing against them, and I don't want to grind our primary development effort to a halt while I'm running installation scenarios on our cluster resources.

 

So I decided to see if I could virtualize my cluster.  I was pretty sure clustering the nodes wouldn't be any big deal, but I was less clear about how I'd provision the shared storage, as I don't happen to have my own SAN lying about.  I did some research to see if anyone else had tackled this problem and stumbled upon this blog post by our PFE Ireland team, which details pretty much exactly what I wanted to do.  However, their example used the StarWind Software product to create an iSCSI target for shared storage.  Nothing against StarWind or their products, but I don't have a license (and need to keep this going past the 30-day eval), and I wanted to see if I could keep this entirely within the Microsoft family.

 

As noted on the PFE Ireland team blog, Windows Storage Server 2008 has been released and is available to MSDN subscribers.  With the Microsoft iSCSI Software Target 3.2 add-on installed, it provides an iSCSI target which the cluster can use as shared storage for cluster resources.

 

My next dilemma was hardware:  part of my contribution to the environment has been to aggressively embrace virtualization technology, and I am down to my Hyper-V host, a stand-alone domain controller, and a couple of laptops - in short, I had no free hardware to re-purpose as a dedicated Storage Server.  So I decided to see if I could get away with virtualizing Storage Server as well.  The process turned out to be remarkably easy, as detailed below.

 

Setting up Windows Storage Server 2008 on Hyper-V

 

 

    • Set up a new virtual machine using Hyper-V Manager - I provisioned mine with 1 GB of RAM and 2 VHDs - one for the OS and one for storage.  (A PowerShell provisioning sketch follows this list.)

 

    • Set up the Windows Storage Server 2008 operating system - this is remarkably easy.  It's more or less an abbreviated Server 2008 installation.  The only "gotcha" is that Storage Server uses a default password (wSS2008!) instead of prompting you for one.

 

    • Log on to Storage Server using the default password, then do your normal new-machine maintenance (e.g. configure Windows Update, join the domain, configure security, etc.)

 

    • Install the Microsoft iSCSI Software Target 3.2 software.  This is a separate download, available in the same directory on MSDN as the Storage Server 2008 media.

 

    • Configure Windows Firewall to allow the iSCSI traffic - this helpful TechNet article details the programs and ports needed to support the iSCSI service.  (A firewall scripting sketch follows this list.)

 

    • Configure the iSCSI target and storage.  The iSCSI Target is configured either under the Storage node in Server Manager or via a dedicated MMC snap-in found in Administrative Tools.  (A PowerShell comparison sketch follows this list.)

 

      • Create the virtual disks:  Under the "Devices" node in the iSCSI Target MMC, create or import the VHDs necessary to support your cluster.  I created 2 small (5 GB) VHDs for my Quorum and MSDTC disk resources, and a few larger VHDs to serve as my SQL shared storage.  I used my "second" VHD as the primary storage for these resources.

 

      • Bring the disks online

 

      • Create the iSCSI target:  Above the "Devices" node is the node "iSCSI Targets."  Simply right-click this node and choose "New iSCSI Target" to start the wizard.

 

        • NOTE:  You will be prompted by the wizard for the IQN of at least one iSCSI initiator.  If your cluster nodes are Windows 2008, the iSCSI initiator software is built in.  If your cluster nodes are Windows 2003, you will need to download and install the iSCSI initiator software.  In either case, run the initiator software to get each node's IQN (initiator name), which will be used by your target for access.  If your nodes aren't ready for this step, you can always specify an IP address, but you'll have to run the initiator at some point.

 

      • Add the storage devices to your target.  Right-click your new target and add the storage created in the previous step to the target.

 

    • Configure iSCSI initiators - right-click the target and choose Properties.  You must list each of the IQNs (or IPs) of the nodes in your cluster here, otherwise the disk resources will not be available to them.
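
For reference, the VM provisioning in the first step can be scripted on hosts that have the Hyper-V PowerShell module (my 2008-era host is managed through the Hyper-V Manager GUI, so treat this purely as a sketch under that assumption; names, paths, and sizes are placeholders):

    # Create the two VHDs - one for the Storage Server OS, one to hold the iSCSI storage
    New-VHD -Path 'D:\VMs\WSS2008\WSS2008-OS.vhdx' -SizeBytes 40GB -Dynamic
    New-VHD -Path 'D:\VMs\WSS2008\WSS2008-Storage.vhdx' -SizeBytes 100GB -Dynamic

    # Create the VM with 1 GB of RAM, attach the OS disk, then add the storage disk
    New-VM -Name 'WSS2008' -MemoryStartupBytes 1GB -VHDPath 'D:\VMs\WSS2008\WSS2008-OS.vhdx' -SwitchName 'LAN'
    Add-VMHardDiskDrive -VMName 'WSS2008' -Path 'D:\VMs\WSS2008\WSS2008-Storage.vhdx'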
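
The firewall step can be scripted as well.  The TechNet article linked above is the authority on the full list of program and port exceptions; as a minimal sketch, opening the standard iSCSI port (TCP 3260) looks like this (the rule name is arbitrary):

    # Allow inbound iSCSI traffic to the software target on the standard iSCSI port
    netsh advfirewall firewall add rule name="iSCSI Software Target (TCP-In)" dir=in action=allow protocol=TCP localport=3260
    # The TechNet article also lists program/service exceptions the target needs beyond this port rule.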
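
The iSCSI Software Target 3.2 add-on itself is configured through the MMC as described above and has no PowerShell cmdlets of its own.  Purely for comparison, on later Windows Server releases where the built-in iSCSI Target Server role does ship with a PowerShell module, the devices / target / mapping steps reduce to something like this (target name, paths, sizes, and initiator IQNs are placeholders):

    # Create the virtual disks that will back the cluster's shared storage
    New-IscsiVirtualDisk -Path 'E:\iSCSI\Quorum.vhdx' -SizeBytes 5GB
    New-IscsiVirtualDisk -Path 'E:\iSCSI\MSDTC.vhdx' -SizeBytes 5GB
    New-IscsiVirtualDisk -Path 'E:\iSCSI\SQLData.vhdx' -SizeBytes 50GB

    # Create the target and restrict access to the cluster nodes' initiator IQNs
    New-IscsiServerTarget -TargetName 'SQLCluster' -InitiatorIds @(
        'IQN:iqn.1991-05.com.microsoft:node1.contoso.com',
        'IQN:iqn.1991-05.com.microsoft:node2.contoso.com')

    # Map each virtual disk to the target so the nodes can see it
    Add-IscsiVirtualDiskTargetMapping -TargetName 'SQLCluster' -Path 'E:\iSCSI\Quorum.vhdx'
    Add-IscsiVirtualDiskTargetMapping -TargetName 'SQLCluster' -Path 'E:\iSCSI\MSDTC.vhdx'
    Add-IscsiVirtualDiskTargetMapping -TargetName 'SQLCluster' -Path 'E:\iSCSI\SQLData.vhdx'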

 

And that's it for the Storage Server - fairly easy.

 

On the cluster nodes

 

 

    • Run the iSCSI Initiator

 

    • Discover the Storage Server by entering its IP address or DNS name

 

    • Discovery (if your firewall settings are correct and the node's IQN or IP is configured with the target) should show the Storage Server as a target along with all of the volumes created above.  Bind all of the volumes.  (A command-line sketch follows this list.)
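
Those three steps can also be driven from the command line with the built-in iscsicli utility.  A rough sketch (the portal address and target IQN are placeholders, and the Quick Login shown here does not persist across reboots - use a persistent login on real cluster nodes):

    # Point the initiator at the Storage Server's portal (default iSCSI port 3260)
    iscsicli AddTargetPortal 192.168.1.20 3260

    # List the targets the portal exposes, then log on to the one created earlier
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.1991-05.com.microsoft:wss2008-sqlcluster-target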

 

And voila - we now have shared storage for a cluster.  The new disks should now show up as unallocated in Disk Management.  At this point you can bring them online, partition and format them, and configure the cluster.
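
As one last hedged sketch, assuming Windows Server 2008 R2 nodes with the FailoverClusters PowerShell module (plain 2008 or 2003 nodes would use cluster.exe or the GUI instead), forming the cluster once the iSCSI disks are online and formatted looks roughly like this; node and cluster names and the IP address are placeholders:

    Import-Module FailoverClusters

    # Validate the candidate nodes and the shared iSCSI storage, then form the cluster
    Test-Cluster -Node 'SQLNODE1','SQLNODE2'
    New-Cluster -Name 'SQLCLUS01' -Node 'SQLNODE1','SQLNODE2' -StaticAddress '192.168.1.60'

    # Add the remaining iSCSI disks to the cluster as available disk resources
    Get-ClusterAvailableDisk | Add-ClusterDisk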

 

I was surprised at how simple this process is - it feels a little weird serving up VHDs from within a VHD, but I've had flawless reliability and great performance.  For my test needs this has been a great solution.
