SRM Networks
While working on and implementing Site Recovery Manager 5.1 I ran into a few issues configuring the networks required for testing and disaster recovery, mostly due to a lack of information and documentation on the subject. The idea seems to be that everyone's network is different, so you'll have to sort it out with your network administrators. However, although a stretched network is preferred, you seldom have one. In our case, the network between the protected and the recovery site is routed. Also, during tests we use a cluster, as in multiple hosts, and the automatic bubble network that SRM creates is useless there: it allows no communication between VMs on different hosts. Again, this seems like a fairly standard setup. So this article provides some basic knowledge on the networks used in VMware Site Recovery Manager and how to configure them.
Which Networks Do We Have
Besides all kinds of VLANs, virtual switches, port groups and physical switches, there are three main networks to consider: the production network, in which your protected VMs live during normal operation; the disaster recovery network, in which the VMs will live in case of a disaster; and the test network, where your VMs will run during a test.
So, three networks are important for SRM:
- Production
- Disaster Recovery
- Test
Note that because we won't change IP addresses during tests or disaster recoveries, these networks should not be routable by default. Only the disaster recovery network should be made routable in case of an actual disaster.
Production Network
Just as it is very easy in virtual environments to create VMs, it is also very easy to create VLANs and move VMs into them. Since each network used by VMs that will be protected must also be present at the recovery site, it is important to make an inventory of the networks in use.
You could use this PowerCLI command to create a list of all VMs in a cluster with their networks:
```powershell
ForEach ($vm in (Get-Cluster "GetShifting Production" | Get-VM)) {
    $nwname = Get-NetworkAdapter -VM $vm | ForEach-Object { $_.NetworkName }
    Write-Host $vm $nwname
}
```
Or if you just want a list of networks in use you could use this command:
```powershell
Get-Cluster "GetShifting Production" | Get-VMHost | Select -First 1 |
    Get-VirtualPortGroup | Select Name | Sort Name
```
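Once you have that list, you can also check which of those port groups are missing at the recovery site. A minimal sketch, assuming PowerCLI sessions to both vCenters; the recovery cluster name "GetShifting Recovery" is an assumption for illustration:

```powershell
# Port groups in use at the protected site
$protected = Get-Cluster "GetShifting Production" | Get-VMHost | Select -First 1 |
    Get-VirtualPortGroup | Select -ExpandProperty Name

# Port groups available at the recovery site (cluster name is hypothetical)
$recovery = Get-Cluster "GetShifting Recovery" | Get-VMHost | Select -First 1 |
    Get-VirtualPortGroup | Select -ExpandProperty Name

# Anything listed here still needs a port group (and mapping) at the recovery site
$protected | Where-Object { $recovery -notcontains $_ }
```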
Using Disaster and Test Network
Creating the networks on the physical switches is explained below; this part is about configuring VMs to use the correct network. Remember, we have to tell SRM which network the VMs should use in which situation.
For example, we have a VM which uses the network “NetWork0-LAN”. We have created a disaster recovery network called “SRM_Uitwijk”, which should be used in case of a disaster recovery, and a test network called “SRM_Test”. The table below shows where each network is configured:
| Network Environment | Network Name | Configured In |
|---|---|---|
| Production | NetWork0-LAN | Production VM Settings |
| Disaster Recovery | SRM_Uitwijk | Inventory Mappings |
| Test | SRM_Test | Recovery Plan |
Configuring Networks
Now, there is an important point to realize here: the test network you configure is the network a VM will use during a test *instead of* the disaster recovery network. So, let's see how this looks.
First configure the disaster recovery network. In vCenter, connect to the protected site and select the Network Mappings tab. Here you configure the disaster recovery network:
These networks have to be available of course.
Then configure the test network. Since this is done in the recovery plan, you'll have to repeat it for each recovery plan that will be part of a test. In vCenter, connect to the recovery site, navigate to the recovery plans, select the recovery plan you want to configure and click “Edit Recovery Plan”. Then follow the wizard until you can configure the test network. As said before, you configure a replacement network for the disaster recovery network:
Complete the wizard. The networks are now correctly configured for both disaster recovery and tests.
Switch Configuration
Of course, to let the networks work between the hosts you'll have to configure the physical switch(es) as well. In the example above we need two separate VLANs: one for disaster recovery and one for testing.
We have a Nexus 5000 switch, and creating the VLANs works like this:

```
conf t
vlan 112
  name SRM_Test
exit
vlan 911
  name SRM_Uitwijk
exit
copy running-config startup-config
```
Of course, the switch ports connecting the ESXi hosts also need configuration. They should be in trunk mode, and the VLANs should be in the allowed VLAN list:

```
conf t
int E1/27
  description ESX22
  switchport mode trunk
  switchport trunk allowed vlan 1,31,100-101,120,230,251,800
  spanning-tree port type edge trunk
exit
copy running-config startup-config
```
You can check your various configurations using these commands:
```
sh run
sh vlan brief
sh int brief
sh int status
```
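With the VLANs in place on the physical side, the matching port groups still have to exist on the ESXi hosts at the recovery site. A hedged PowerCLI sketch, assuming the hosts use a standard vSwitch named "vSwitch0" and a recovery cluster named "GetShifting Recovery" (both names are assumptions), with the VLAN IDs taken from the switch configuration above:

```powershell
# Create the test and disaster recovery port groups on every host in the recovery cluster
ForEach ($vmhost in (Get-Cluster "GetShifting Recovery" | Get-VMHost)) {
    $vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"
    New-VirtualPortGroup -VirtualSwitch $vswitch -Name "SRM_Test" -VLanId 112
    New-VirtualPortGroup -VirtualSwitch $vswitch -Name "SRM_Uitwijk" -VLanId 911
}
```

On a distributed switch you would create the port groups once with `New-VDPortgroup` instead of per host.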