Tuesday 17 May 2011

Virtual MS Failover Cluster migration between storages

Recently we had a task to migrate all data from an old HP MSA 2312 FC to an HP EVA 6400. The MSA was used mostly for vSphere and hosted several VMFS datastores. Basically, the whole migration procedure was quite simple, though it had to be done very carefully.


However, we faced a challenge. I have had experience migrating VMs between storages and vSpheres at the same time (even between different versions - 3.5 and 4.1), so normally I could easily draft a migration plan. This time, however, there were two production Virtual-to-Virtual MS Failover Clusters running in that vSphere. One failover cluster was running MS Windows 2003 and Exchange Server. The other was running a Windows 2008 Failover Cluster with a SQL cluster. Both clusters were using RDM disks in Physical Compatibility mode. Since the services running on these clusters were quite important, we couldn't afford a long downtime.


So I googled a bit for solutions for migrating a Virtual-to-Virtual cluster and, unfortunately, found little information about this specific task. There were two main approaches:
1. Copying data from the old shared disks to new shared disks and then reconfiguring the cluster for the new disks - there is a procedure from MS and a slightly different one from NetworkAdmin.
2. Using vendor features, e.g. HP Continuous Access. That's what we used when we had to replace our old EVA3000 with the EVA6400.
Neither solution was a good fit for us: the first one seemed a bit risky - playing around with a production cluster configuration is no fun - and the second requires a CA license, which we didn't have. So we decided to test a nice solution I found in the VMware Communities. It provides instructions for storage migration of RDM disks, and I would like to extend it into a step-by-step guide for Virtual MS Cluster migration, first of all for myself, in case I need to do it again someday.


Let's assume you have already installed the new storage - either into your existing fabric or as a new one - and that each of your ESXi hosts already has a connection to both storages. There should also be a new VMFS datastore configured on the new storage and added to all ESXi hosts. Here we go:
1. Create new LUNs on the EVA 6400. Make sure these LUNs are equal to or bigger than the source LUNs. Different storage models can report sizes in different ways, so it is wise to make the destination LUNs a bit bigger.
2. Present all new LUNs to all ESXi hosts and make sure all of them are visible (for example by rescanning the HBAs, as sketched below).
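A minimal sketch of that rescan from the ESXi console (or vMA); vmhba1 is an assumed adapter name, substitute your own:
# rescan one HBA for newly presented LUNs
esxcfg-rescan vmhba1
# list all SCSI devices in compact form to confirm the new LUNs show up
esxcfg-scsidevs -c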
3. Shut down the Virtual Cluster nodes.
4. Back up the VMX configuration files of your cluster nodes and take screenshots of your cluster nodes' Virtual Machine Properties. This will help you at the final stage to recreate the cluster configuration, or to restore the original one in case something goes wrong. Generally speaking, you need the disk order and the mapping of disks to virtual SCSI ports.
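For reference, this mapping lives in the VMX file as entries like the ones below: a shared SCSI controller plus one disk per virtual port, where fileName points at the RDM mapping file. This is only an illustrative sketch; the controller and port numbers and the file name are assumptions:
scsi1.present = "TRUE"
scsi1.sharedBus = "physical"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "MSCluster_2.vmdk"
scsi1:0.deviceType = "scsi-hardDisk"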
5. Remove the RDM shared disks from both virtual nodes. Make sure you don't delete the mapping files; just remove the RDM disks from the VM configuration.

6. Use Storage vMotion to move your Virtual Cluster nodes to the VMFS on the new storage.
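Besides the vSphere Client, this can be driven from the vSphere CLI/vMA with the svmotion command. A minimal sketch, assuming the vSphere CLI is installed; vcenter.local is a hypothetical vCenter name, and interactive mode prompts for the VM and the destination datastore:
# interactive mode walks you through datacenter, VM and target datastore
svmotion --server vcenter.local --username administrator --interactive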
7. Use SSH to log in to any of the ESXi servers. You can use vMA as well.
8. Now you need to get information about the destination disks that we will clone the source RDM disks to. I like to use the LUN number to identify the destination disk address (that's why I prefer to keep LUN number assignments unique across the whole vSphere/SAN):
a. First, check whether there is an active path to the destination disk, using the destination disk's LUN number.
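For example, a sketch assuming the destination disk was presented as LUN 15 (an illustrative number); look for state:active in the output:
# brief path listing, filtered on the destination LUN number
esxcfg-mpath -b | grep "LUN:15"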



b. Check for the device path of the destination disk that we will use when cloning, as sketched below.

I would also recommend comparing the naa ID of the device with the one you can find in your storage management GUI, e.g. HP StorageWorks Command View EVA.
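A minimal sketch of finding that path with esxcfg-scsidevs; the naa ID below is the destination disk used later in this post, and the compact listing includes the /vmfs/devices/disks/... console device path we need:
# compact device list, filtered on the destination disk's naa ID
esxcfg-scsidevs -c | grep naa.6001438005dedb350000500004300000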

9. So now we are ready to start. The syntax of the command in our case is:
vmkfstools -i [source disk] -d [disk type] [VMFS destination path]

Let's go through all of the options we are going to use:
-i stands for the --clonevirtualdisk command,
[source disk] is the location of the RDM file, that is, the mapping file,
e.g. /vmfs/volumes/VMFS-old/MSCluster/MSCluster_2.vmdk
-d [disk type] is where you define the destination disk type. In our case we will use the rdmp type plus the device path we got from the esxcfg-scsidevs command,
e.g. -d rdmp:/vmfs/devices/disks/naa.6001438005dedb350000500004300000
[VMFS destination path] is the path to the new vmdk file, which will be the new mapping file,
e.g. /vmfs/volumes/VMFS-new/MSCluster/MSCluster_2.vmdk

After we collect all the bits of information, the final command should look like this one:
vmkfstools -i /vmfs/volumes/VMFS-old/MSCluster/MSCluster_2.vmdk -d rdmp:/vmfs/devices/disks/naa.6001438005dedb350000500004300000 /vmfs/volumes/VMFS-new/MSCluster/MSCluster_2.vmdk
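Before reattaching anything, it may be worth verifying that the new vmdk really is a passthrough mapping pointing at the destination device, using vmkfstools' RDM query option:
# should report a passthrough raw device mapping and the device it maps to
vmkfstools -q /vmfs/volumes/VMFS-new/MSCluster/MSCluster_2.vmdk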
10. Once you have cloned all the RDM disks, go back to your cluster nodes' Virtual Machine properties and add all the newly cloned RDM disks back. Be sure to attach the RDM disks to the same virtual SCSI ports; this is where the information collected in step 4 comes in very useful.
11. Power up the first node and make sure the node can see all disks, all services have started, and all cluster resources are available to the cluster. Power up the second node. Then test different failover scenarios, for example from the command line as sketched after this list:
a. move resources between nodes,
b. stop the cluster service on one node and make sure everything fails over to the second node,
c. simulate a power failure by powering off one node and check that all cluster resources fail over to the second node.
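On the Windows 2003 nodes, the first test can be driven with cluster.exe; "Cluster Group" and NODE2 below are illustrative names, substitute your own:
REM move a resource group to the other node
cluster group "Cluster Group" /moveto:NODE2
REM list all groups with their current owner nodes and states
cluster group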


So now we can report to the boss that the migration has finished successfully. Remember that you can always roll back your cluster configuration, since all the disks are left intact on the old storage.


Another interesting point made by my colleague was that ESXi and vmkfstools can facilitate any type of storage migration, not only RDM disks of VMs. Basically, you can easily migrate all LUNs from your old storage to the new one. You only need one ESXi host connected to both storages, one VM (you don't even need to install an OS on it) and knowledge of the vmkfstools syntax. The routine is the following:


1. Present all source LUNs to the ESXi host.
2. Add each LUN to the VM as an RDM disk. At this point the VMDK and RDM mapping will be created (this can also be done from the CLI, as sketched after this list).
3. Clone the RDM disk to the new storage.
4. Reattach the new LUN to the server.
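For step 2, vmkfstools can create the physical-mode mapping file directly. A minimal sketch; the naa placeholder, the datastore1 datastore and the dummyVM folder are illustrative:
# create a passthrough (physical mode) RDM mapping file for a raw LUN
vmkfstools -z /vmfs/devices/disks/naa.<source-lun-id> /vmfs/volumes/datastore1/dummyVM/lun1.vmdk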


That was quite a long article for a first post, although I tried to make it as short as possible and not overload it with too many details.


If you find this post useful, please share it using any of the buttons below.
