InterWeb.org.uk

Interweb Blog Hyper-V, App-V, Dell Equallogic, Forefront TMG Security, DPM 2010, SCVMM

11Mar/11

Hyper-V Server 2008 R2 with Cluster Shared Volumes – Quick Overview

Virtualization, I am sure, has been on everyone's agenda for a while now, but fully embracing the virtual world wasn't quite there yet for us.  We had been running VMware ESXi for a little over 18 months when we decided to take a look at Microsoft's Hyper-V offering (mainly due to cost).  So we installed the Hyper-V server role on a full-fat install of Windows Server 2008 R2, pre-SP1, so Dynamic Memory was merely a rumour at this point!  First impressions were good, and after looking at Cluster Shared Volumes and a lot of further testing we decided to go ahead and move our proof of concept into production.

Planning Physical to Virtual Conversions - Capacity Planning has to start somewhere!

If you are planning to convert existing physical servers to Hyper-V virtual machines with VMM 2008 R2, there is a good tool from Microsoft: the MAP toolkit (Microsoft Assessment and Planning toolkit).  The MAP toolkit is agentless and will help you gather the information required to plan your Hyper-V assault, identifying candidates for virtualization and auto-generating reports.  Also, with the Microsoft Integrated Virtualization ROI Tool you can calculate potential power cost savings with Hyper-V before deploying; some useful information to have in your arsenal!

P2V planning also gives you a good starting point to spec up your hardware and plan your SAN capacity.

Hardware.

Getting the hardware right is critical, as adding or changing hardware after the cluster has been created is not just inadvisable but difficult; I have read of others' exploits of having to recreate a cluster due to hardware changes.

  • Processor: x64-compatible processor with hardware-assisted virtualization enabled.  All processors in your cluster should be from the same manufacturer, with Hardware Data Execution Prevention (DEP) available and enabled in the BIOS.  DEP is required to protect the hypervisor from an application or service executing code from a non-executable memory region and should be taken seriously.  There are options available in VMM to allow migration of a VM to a host with a different processor; note that this means a different version from the same manufacturer, not a different manufacturer.
  • Hard Disk: Your HDD needs to be at least 8GB, but Microsoft recommends 20GB.  We have 120GB disks in a RAID 1 (mirror) configuration.  No applications should be installed on the hypervisor partition, well maybe anti-virus, but I will discuss this later on.
  • RAM: With your MAP toolkit reports to hand you will be able to plan your RAM capacity a lot better.  Remember that 512MB is reserved for the host itself, and that there is an overhead of up to 32MB for the first GB of RAM plus another 8MB for every additional GB of RAM assigned to each virtual machine (see the worked example after this list).  Upgrading the RAM in the cluster is an easy task; I have carried this out on each cluster node before with no issues.
  • Cluster Storage: Confirm compatibility of your storage!  If you are planning on using the Cluster Shared Volumes feature then you will need a SAN of some description, iSCSI or Fibre Channel, preferably from a vendor that supports Hyper-V; Dell's EqualLogic iSCSI SAN offering is what we currently use.  Your cluster storage will store your VMs and maybe some ISOs; I also have a smaller LUN for staging new VMs and performing maintenance tasks.
  • Network: We have 4 NICs on each host: one to provide VMs with network access connected to the LAN, one for cluster/live migration and 2 for iSCSI (MPIO).  All NICs are gigabit, and the iSCSI network interfaces are Broadcom NetXtreme IIs with iSCSI offload on the card, which moves the iSCSI processing onto the NIC and away from the host's processor.  As for switching, a separate private network is recommended for cluster traffic and another for iSCSI traffic; both should be gigabit.  Remember that iSCSI traffic will happily saturate a shared network, so I would definitely recommend isolating this traffic on its own network.  This is where all your VMs will be stored, so getting this bit right is important.
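
To make that RAM overhead arithmetic concrete, here is a rough PowerShell sketch; the VM sizes are purely illustrative, not our actual workload:

    # Estimate host RAM usage from the overhead figures above:
    # 512MB reserved for the host, then per VM 32MB for the first GB
    # of assigned RAM plus 8MB for every additional GB.
    $hostRamGB = 32               # RAM installed in each host
    $vmSizesGB = @(4, 4, 2, 8)    # illustrative VM memory assignments

    $usedMB = 512                 # host reserve
    foreach ($gb in $vmSizesGB) {
        $usedMB += ($gb * 1024) + 32 + (($gb - 1) * 8)
    }
    "Estimated usage: {0}MB of {1}MB" -f $usedMB, ($hostRamGB * 1024)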

With these basic requirements in mind, remember that with Hyper-V Server you can have up to 16 nodes in your cluster, with 64 logical processors and 1TB of RAM per host.  I would allow for N+2 failover, as we have done with our cluster; this gives you some flexibility when performing maintenance tasks such as patching and upgrading.  Microsoft recommends that one cluster host should be reserved for these tasks and not made available for placement.  N+2 gives your cluster the capacity for 2 nodes to fail with all services still online.

Our setup is 6 Dell PowerEdge R410 1U rack servers, each with 4 NICs, 2x 120GB HDDs in RAID 1, 32GB RAM and 2x Intel Xeon 2.4GHz processors.  Our cluster storage is a 3TB LUN on a Dell EqualLogic PS4000/6000 iSCSI array.  We also have one server dedicated to running System Center Virtual Machine Manager 2008 R2.

Software.

Hyper-V Server 2008 R2 is free and is basically Windows Server 2008 R2 Core with the Hyper-V and Clustering roles installed; you then license the VMs you run.  Although we used Windows Server 2008 R2 Enterprise for the POC, we went with Hyper-V Server 2008 R2 Core for production, as we predicted we would be unlikely to scale up to the capabilities of the Enterprise edition (watch this one come back and bite me!).  Windows Server 2008 R2 Enterprise Edition supports 8 CPU sockets and 2TB of RAM.  Microsoft recommends Core installations as they have a smaller footprint, require less patching and carry a minimal overhead; how minimal is up for debate, but probably very.

In all, the choices available are:

  • Hyper-V Server 2008 R2 Core
  • Windows Server 2008 R2 Standard
  • Windows Server 2008 R2 Enterprise
  • Windows Server 2008 R2 DataCenter

Your choice will depend on the size of your environment and your capacity planning for the future.  Note that Windows Server 2008 R2 Standard Edition does not support failover clustering.  I would suggest more research on this.

All your hosts should be running the same operating system and be either all full installs or all Core; I would not recommend a mix of the two.

Installing.

Installing Hyper-V Server 2008 R2 is very quick compared to a full installation, and the installation on all hosts should be identical.

Firstly, after the install you should identify which physical NIC maps to which Local Area Connection in the parent partition, then rename them (see the example commands after the list below).  This will make things easier in the long run.  Something like:

  • Cluster
  • SAN (x2) for MPIO
  • External
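
On a Core install you can do the renaming from the command line with netsh; the "Local Area Connection" names below are just the Windows defaults and yours may map differently, so check with the show command first:

    # List the connections to work out which NIC is which
    netsh interface show interface

    # Rename each one to something meaningful
    netsh interface set interface name="Local Area Connection" newname="External"
    netsh interface set interface name="Local Area Connection 2" newname="Cluster"
    netsh interface set interface name="Local Area Connection 3" newname="SAN1"
    netsh interface set interface name="Local Area Connection 4" newname="SAN2"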

Configure the NIC (External) that will be connected to the LAN in the parent partition, apply any patches/security fixes in the parent partition, then join the host to the domain.  This is the NIC I used in Hyper-V to bind my virtual External network to the real world; it provides all my VMs with network access.
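
Something along these lines will do it from the console; all the addresses and the domain name here are placeholders for your own:

    # Address the External NIC on the LAN (placeholder addresses)
    netsh interface ipv4 set address name="External" static 10.0.0.11 255.255.255.0 10.0.0.1
    netsh interface ipv4 set dnsservers name="External" source=static address=10.0.0.2

    # Join the domain (example.local and the account are placeholders)
    netdom join $env:COMPUTERNAME /domain:example.local /userd:example\admin /passwordd:*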

Now I will slip the anti-virus bit in here: "do I or don't I?".  All your IT expertise tells you that you should be running AV on your server installation.  I would recommend against any such practice with Hyper-V.  Firstly, I would not run any applications in the parent partition unless absolutely necessary, and this includes anti-virus applications.  Secondly, there are some risks involved with running AV in your parent partition: most AV products have been designed without Hyper-V in mind and will kill actions that are perfectly OK, thinking they are malicious.  So if you do decide to run AV in your parent partition, because a policy somewhere says that's what you have to do, then remember to exclude:

  • Default virtual machine configuration directory (C:\ProgramData\Microsoft\Windows\Hyper-V)
  • Custom virtual machine configuration directories
  • Default virtual hard disk directory (C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks)
  • Custom virtual hard disk directories
  • Snapshot directories
  • Vmms.exe
  • Vmwp.exe
  • Anything located on a Cluster Shared Volume (C:\ClusterStorage)

Microsoft have done a nice job of providing you with a configuration utility called sconfig (Server Config); from here you can enable the Failover Clustering feature and Remote Desktop, and configure remote management, as well as configure your NICs and join your domain.  I have enabled MMC remote management so you can remotely manage Hyper-V Server with the Hyper-V snap-in available in RSAT.  Enabling these features opens the appropriate ports in the Windows Firewall, but any other application-specific ports will need to be opened manually.
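
sconfig flips most of this for you, but for reference I believe these are the equivalent firewall commands on 2008 R2; double-check the rule group names on your build:

    # Allow remote MMC management through the Windows Firewall
    netsh advfirewall firewall set rule group="Remote Administration" new enable=yes
    netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes
    netsh advfirewall set currentprofile settings remotemanagement enable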

After you have connected your host to the network, patched it and decided whether or not to install AV, you can start to configure failover clustering and CSV, or Cluster Shared Volumes.

Failover Clustering.

"A fail-over cluster is a group of independent computers that work together to increase the availability of applications and services" - Thank you Microsoft.

Configure your Cluster NIC on a private IP range such as 192.168.2.0/24, so your cluster traffic will be completely isolated from parent partition traffic and iSCSI traffic.  You will also need to configure your iSCSI NIC(s); I use 2 network interfaces, both configured with private IP addresses on the same subnet, so you can then use MPIO for greater throughput and failover.
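
For example, using netsh again; the SAN subnet here is a placeholder, and the cluster range matches the one above:

    # Cluster/live migration NIC on its own private range
    netsh interface ipv4 set address name="Cluster" static 192.168.2.11 255.255.255.0

    # Both iSCSI NICs on the SAN subnet (10.10.10.0/24 is a placeholder)
    netsh interface ipv4 set address name="SAN1" static 10.10.10.11 255.255.255.0
    netsh interface ipv4 set address name="SAN2" static 10.10.10.12 255.255.255.0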

Your cluster will need an IP address on the same network as your parent partition and a name/DNS entry.  Launch the Failover Cluster Manager MMC and create a cluster, adding all of your hosts; once all of your hosts are added, the cluster will go through a validation process checking your hardware/network configuration.
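
If you prefer PowerShell to the MMC, the FailoverClusters module will do the same job; the host names, cluster name and address below are all placeholders:

    Import-Module FailoverClusters

    # Run validation across all the hosts first
    Test-Cluster -Node hv1,hv2,hv3,hv4,hv5,hv6

    # Create the cluster with its name and an IP on the parent partition's network
    New-Cluster -Name HVCLUSTER1 -Node hv1,hv2,hv3,hv4,hv5,hv6 -StaticAddress 10.0.0.50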

You will need a cluster Quorum disk.  A Quorum or Witness disk holds a copy of the cluster configuration database; this disk will be a LUN on your SAN, accessible by all cluster members.  Microsoft recommends that this disk be at least 500MB for NTFS and best performance; I would make it 700MB just to be safe.  You don't need to give your Quorum disk a drive letter, and leave it as a basic disk.

When you have added your Quorum disk you can then configure it.  There are 4 options for configuring the Quorum disk, but I will only mention the two that will be relevant to most installations (example commands follow the second option below).  How your Quorum will be configured depends on how many nodes you have in the cluster and whether it is an odd or even number.  If there is an even number of nodes then the Quorum configuration will recommend:

  • Node and Disk Majority - This is used when there is an even number of nodes in the cluster, such as the 6 in our configuration.  If a vote between the nodes took place there would be no majority winner at 3 votes to 3; this is when the witness disk has the deciding vote (the quorum calculation is based on votes), using the information contained within the database it holds.  With this configuration there will always be a majority winner in an even cluster node configuration.

If there is an odd number of nodes in your cluster then the recommended configuration would be:

  • Node Majority - With this configuration there will always be a majority winner.
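
The same two configurations can be set from PowerShell; "Cluster Disk 1" is just whatever name your witness disk happens to have in Failover Cluster Manager:

    Import-Module FailoverClusters

    # Even number of nodes: Node and Disk Majority with the witness disk
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"

    # Odd number of nodes: Node Majority, no witness disk required
    Set-ClusterQuorum -NodeMajority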

When adding or removing nodes from your cluster you will need to re-configure your Quorum settings to add or remove a Witness disk from the cluster configuration.  When a Witness disk is removed from the cluster, perhaps because you have gone from an even number of nodes to an odd number, the disk will be placed into available storage.  I would personally always create a disk for a Quorum and leave it in available storage, ready to come into play if required; what if you add another node, or lose one through failure?  You would then require a Witness disk.

Cluster Shared Volumes.

Once you have configured a cluster Quorum disk you can then add a LUN to your hosts to use as a Cluster Shared Volume.  CSVs are only supported for Hyper-V and are a feature that you shouldn't be without.

Connect all the hosts in your cluster to the CSV LUN and format it with NTFS.  It needs to be a GPT partition so it can be expanded beyond 2TB.  Give it a name like CSV1/2/3; you can have more than one CSV in a cluster, so the numerical suffix is advisable.
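
From one of the hosts (you only format once, the others just connect), diskpart will do the conversion and format; the disk number 3 is a placeholder, so run "list disk" first to find yours:

    # Build a diskpart script; disk 3 is a placeholder, check 'list disk' first
    $dp = "select disk 3",
          "convert gpt",
          "create partition primary",
          'format fs=ntfs label="CSV1" quick'
    $dp | Out-File C:\csv1.txt -Encoding ASCII
    diskpart /s C:\csv1.txt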

In Failover Cluster Manager, navigate to the storage section and add your new disk as a Cluster Shared Volume (you will need to enable Cluster Shared Volumes first).  When you have added your CSV disk you will see that it has no drive letter assigned to it.  This is correct: the CSV has a mount point of 'C:\ClusterStorage\Volume1' for the first CSV disk.  Don't worry, your VMs are not actually stored on your C:\ partition; this is just a mount point for the cluster volume.
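
The PowerShell equivalent is below; I believe the EnableSharedVolumes property is how the one-off enable step is exposed in 2008 R2, and "Cluster Disk 2" is whatever name your new disk received:

    Import-Module FailoverClusters

    # Enable the Cluster Shared Volumes feature on the cluster (one-off step)
    (Get-Cluster).EnableSharedVolumes = "Enabled"

    # Add the disk as a CSV, then confirm its C:\ClusterStorage mount point
    Add-ClusterSharedVolume -Name "Cluster Disk 2"
    Get-ClusterSharedVolume | Format-List Name,SharedVolumeInfo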

Now you can begin to test your cluster by creating VMs and live migrating them between hosts.
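
A quick way to prove it from PowerShell; the VM and node names are placeholders, and for a clustered VM on 2008 R2 this cmdlet performs a live migration:

    # Live migrate a clustered VM to another node ("TestVM"/"hv2" are placeholders)
    Move-ClusterVirtualMachineRole -Name "TestVM" -Node hv2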

That's pretty much it for installing Hyper-V Server and setting up a Cluster Shared Volume.  Like my title suggests, this was just a very quick overview of what's involved, and it certainly won't tell you all you need to know; I always suggest thorough research.

I didn't cover live migration on purpose, as this can get complicated; Microsoft's Live Migration Network Configuration Guide is a good starting point.  Although I have not gone into live migration in any detail, this setup with a dedicated private network for cluster/live migration does work, and I have had no issues so far.

THE END......

I will use this post to blog about more specific items contained within, such as Dell EqualLogic and switch configuration, Virtual Machine Manager 2008 R2 and physical-to-virtual conversions, and creating virtual machine templates for quick deployment.  I will also look at networking within Hyper-V and VLANs, a backup solution using DPM 2010 with hardware VSS, and MPIO using Dell's Host Integration Tool Kit for EqualLogic.
