Configuring the EMC CLARiiON Storage System

In this chapter you will learn the basics of configuring replication with EMC CLARiiON. As with Chapters 2 and 3, which covered Dell EqualLogic and EMC Celerra, this chapter is not intended to be the definitive last word on all the best practices or caveats associated with the procedures outlined. Instead, it's intended as a quick-start guide outlining the minimum requirements for testing and running a Recovery Plan with SRM; for your particular requirements, you should at all times consult further documentation from EMC and, if you have them, your storage teams. Additionally, I've chosen to cover configuration of the VMware iSCSI initiator to more closely align the tasks carried out in the storage layer with those carried out on the ESX host itself.

EMC is probably best known for the Fibre Channel market, in which it provides both physical and virtual storage appliances. However, like those of many storage vendors, EMC's systems work with multiple storage protocols, supporting iSCSI and NFS connectivity through the Celerra system. Like some other vendors, EMC does have publicly available virtual appliance versions of its iSCSI/NAS storage systems—specifically, its Celerra system is available as a virtual machine. However, at the time of this writing there is no publicly available virtual appliance version of the CLARiiON system.

I have two EMC systems in my rack: a Celerra NS-20 and a newer Unified Storage NS-120. The NS-120 supports newer VMware features such as the vStorage APIs for Array Integration (VAAI), and it is managed through the Unisphere management system, which integrates closely with VMware: it can tell that a connected server is running ESX, and you can even inspect the contents of the virtual machine disks that make up a VM. Both units are remarkably similar in appearance, as you can see in Figure 4.1. Incidentally, both systems are shown here without their accompanying disk shelves.

Both the NS-20 and the NS-120 have been added into a domain in the Unisphere management console. In Figure 4.2, which shows the Unisphere management console, you can see that I have two CLARiiON systems. I'm going to assume you already have this configuration in place. If you don't have the two CLARiiON systems listed in the same view, take a look at the domain section of Unisphere, which allows you to configure multi-domain management. Also, I'm going to assume that the work required at the fabric layer (WWNs and zoning) has already been carried out correctly.


Figure 4.1 Two generations of EMC equipment, shown without their disk shelves


Figure 4.2 From the All Systems pull-down list it is possible to select other arrays from the list to be managed.

In early versions of SRM the EMC CLARiiON SRA required a consistency group for the SRA to work. This is no longer a requirement as of SRM 4.0. Consistency groups are used when you have multiple LUNs to ensure that replication or MirrorView keeps the state of those multiple LUNs in sync. Although consistency groups are no longer a hard requirement, they are a best practice, so I will show you how to create them.

Creating a Reserved LUN Pool

Depending on how your arrays were first set up and configured, you may already have a reserved LUN pool (RLP). This is required specifically for asynchronous replication. The RLP is used for all snaps and so is also required, at the very least, on the DR side, where snaps are used to present the VMFS volumes to the ESX hosts during testing. You can think of the RLP as an allocation of disk space specifically for the "shipping" of updates from the Protected Site array to the Recovery Site array. The snaps are not really used for "shipping" in the way they are with the Celerra; they are used to preserve a gold copy of the latest confirmed good copy at the DR side (in the event of a partial transfer) and to accumulate changes that may be written on the production side to blocks that are actively being transferred. It is not unusual in the world of storage arrays for the vendor to carve out a chunk of storage specifically for sending and receiving updates; as noted above, the RLP is the area from which all storage for any type of snapping (for local or remote needs) is drawn. Storage vendors tend to have many different ways of doing this, with some opting to reserve a percentage of each volume/LUN for this purpose.

The size and number of the RLP LUNs depend greatly on the number of changes to the LUN you expect between each update. The RLP is also used by SRM during the test phase, when a snapshot is engaged on the Recovery Site secondary image (the LUN that receives updates). It may well be that an RLP has already been configured on the system; if you are unsure, you can check by selecting the array in question in Unisphere, clicking the Replicas icon, choosing Reserved LUN Pool in the menu, and then selecting the Free LUNs tab, as shown in Figure 4.3.

At the same time, you may wish to check if you have the necessary LUNs for the write intent logs (WILs). These LUNs are created in a similar way to the RLP; they are like a transaction log in the system and they help the system to recover from unexpected outages. The WIL is normally a small allocation of storage that is used to hold this logging information.


Figure 4.3 The reserved LUN pool on the New York CLARiiON

If your CLARiiON array does not have an RLP, you can easily configure one by following these steps.

1. In Unisphere, select the Storage icon and then select LUNs from the menu.

2. Set a size for the LUNs and the number you require—in my case, because my LUNs will be small and the updates smaller still, I opted to create ten LUNs of 10GB each (see Figure 4.4). The LUN IDs can safely be set to any value beyond the range ESX can normally see (LUNs 0–255) because the hosts will never interact directly with these volumes.

3. Specify a name for each LUN, such as RLP, and a starting ID, such as 00. This will create a naming convention of RLP_0, RLP_1, and so on.

4. Create the LUNs needed for the WILs.

In this case, I used a RAID group that was not part of the New York pool, allocating disk space just to hold these logs, as shown in Figure 4.5. It is worth noting that WILs, which are used to avoid a full mirror resynchronization in the event of a connection failure, cannot currently be created in a pool.

5. Click Apply and then Cancel once the LUN creation process has ended.

Now that we have our required LUNs, we need to allocate these LUNs to the RLP and the WIL, respectively.


Figure 4.4 In the Create LUN dialog box you can create many LUNs of fixed size. Here I’m creating ten 10GB LUNs with a label of “RLP.”


Figure 4.5 Only a small amount of storage is required to hold the WIL


Figure 4.6 Once the LUNs have been created they can be allocated to the reserved LUN pool.

6. Click the Replicas icon in Unisphere, select Reserved LUN Pool on the menu, and then click the Configure button. This will allow you to add all the LUNs in the list (RLP_0 through RLP_9) to the Global Pool LUNs list, as shown in Figure 4.6.

7. When you are done, click OK.

8. To allocate the WIL LUNs you just created, click the Configure the Mirror Write Intent Log link in the Configuration and Settings pane.
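If you prefer to script this work, the NaviSphere CLI mentioned later in this chapter can do the same job. The following is a minimal sketch only, assuming the SP address from my lab (172.168.3.79), a spare RAID group 1, and arbitrary LUN IDs 200 through 211; check the syntax, particularly for the WIL allocation, against the CLI reference for your FLARE release.

    # Bind a 10GB RAID 5 LUN for the RLP (repeat for LUN IDs 201 through 209)
    naviseccli -h 172.168.3.79 bind r5 200 -rg 1 -cap 10 -sq gb
    # Add the bound LUNs to the global reserved LUN pool
    naviseccli -h 172.168.3.79 reserved -lunpool -addlun 200 201 202 203 204 205 206 207 208 209
    # Allocate two small LUNs as write intent logs, one per storage processor
    naviseccli -h 172.168.3.79 mirror -sync -allocatelog -spA 210 -spB 211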

Creating an EMC LUN

EMC uses a concept called "RAID groups" to describe a collection of disks with a certain RAID level. In my case, RAID group 0 is a collection of RAID 5 drives used by my ESX hosts. This RAID group has been added to a storage group called "New_York_Cluster1" to be used by various servers in my environment. In this section I will create a LUN in RAID group 0, held in the New_York_Cluster1 storage group. Then I will set up synchronous replication between the New York CLARiiON and the New Jersey CLARiiON using EMC MirrorView technology. Asynchronous replication would be used if your Recovery Site were a long distance away from your Protected Site, and it is ideal for most DR environments. Synchronous replication keeps your data in a much better state, but it is limited by distance—in practice, around 50–80 kilometers (31–50 miles). Some organizations deploy synchronous replication to get their data in a good state off-site, and then follow through with asynchronous replication to get the data a suitable distance away from the Protected Site to protect the business from a DR event.

To create an EMC LUN, follow these steps.

1. Open a Web browser to the IP address used to access Unisphere; in my case, this is https://172.168.3.79.

2. Log in with your username and password.

3. Select the CLARiiON array; in my case, this is APM00102402427 (this is the serial number of my CX4).

4. Select the Storage icon, and from the menu that opens, choose LUNs.

5. Click the Create LUN link in the LUNs pane, or click the Create button (see Figure 4.7).

6. In the dialog box that opens, select a free LUN ID number; I used LUN ID 60.

7. Set the LUN size in the User Capacity edit box; I used 100GB.

8. In the LUN Name edit box enter a friendly name. I used LUN_60_100GB_VIRTUALMACHINES, as shown in Figure 4.8.

This LUN number is not necessarily the LUN ID that will be presented to the ESX host; what's actually presented is a host ID. The LUN number could be 300, but when the LUN is later allocated to the ESX host the administrator can assign a host ID between 0 and 255, since that range is the maximum number of LUNs an ESX host can currently see.


Figure 4.7 The Create LUN link enables you to open a dialog box to create a new datastore for the ESX hosts.


Figure 4.8 A 100GB LUN being created in the New York storage pool

9. Click Apply, and then click the Yes option in the next dialog box that opens to confirm your change. Once the LUN has been created, click OK to confirm the successful operation. The Create LUN dialog box will stay on-screen as it assumes you want to create a number of LUNs of different sizes; once you are done you can click the Cancel button to dismiss it.
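If you would rather bind this LUN from the command line, a naviseccli sketch along these lines should achieve the same result; the SP address comes from step 1, while RAID 5 in RAID group 0 reflects my lab and may differ in yours.

    # Bind a 100GB RAID 5 LUN with LUN ID 60 in RAID group 0
    naviseccli -h 172.168.3.79 bind r5 60 -rg 0 -cap 100 -sq gb
    # Rename it to match the friendly name used in the GUI example
    naviseccli -h 172.168.3.79 chglun -l 60 -name LUN_60_100GB_VIRTUALMACHINES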

Configuring EMC MirrorView

To configure EMC MirrorView, follow these steps.

1. In the LUNs view, right-click the LUN you wish to replicate; in my case, this is LUN_60_100GB_VIRTUALMACHINES.

2. In the menu that opens, select MirrorView and then Create Remote Mirror (see Figure 4.9).


Figure 4.9 Any volume can be right-clicked in this view and enabled for MirrorView.

3. In the dialog box that opens, select Synchronous, enter a friendly name, and then click OK. I named my remote mirror "Replica of LUN60 – Virtual Machines," as shown in Figure 4.10.

After clicking OK, you will receive the Confirm: Create Remote Mirror dialog box, shown in Figure 4.11. For MirrorView to be successfully set up, a secondary mirror image needs to be created on the Recovery Site (New Jersey). This secondary mirror image is a LUN which is the recipient of updates from the Protected Site array.
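For reference, the MirrorView/S CLI equivalent of step 3 would look roughly like the sketch below; the shortened mirror name is my own, the SP address reuses the value from my lab, and you should verify the exact syntax against the naviseccli reference for your FLARE release.

    # Create a synchronous remote mirror using LUN 60 as the primary image (-o suppresses confirmation)
    naviseccli -h 172.168.3.79 mirror -sync -create -name Replica_of_LUN60 -lun 60 -o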

The next step is to create this secondary image LUN in the Recovery Site CLARiiON (New Jersey).

4. Right-click the LUN that was just created, select MirrorView, and this time select the option Create Secondary Image LUN.

This will open a dialog box that displays the other Recovery Site CLARiiON arrays visible in Unisphere. In this case, Unisphere just selects the next available LUN ID for the secondary mirror. You can see in Figure 4.12 that the CK2000 array is my EMC NS-20 in the New Jersey location.

5. Click OK and confirm the other ancillary dialog boxes. This will create a LUN at the Recovery Site CLARiiON (New Jersey). You can rename it if you wish to make it more meaningful. I called mine “LUN_60_100GB_NYC_VIRTUALMACHINES_MIRRORVIEW.”


Figure 4.10 Naming the remote mirror


Figure 4.11 A secondary mirror image must be created on the Recovery Site.


Figure 4.12 The array is CK2000

6. This new secondary image LUN must be added to the remote mirror created earlier. On the Protected Site CLARiiON (New York) click the Replicas icon, and in the menu that opens select the Mirrors option; this should refresh the display (in my case, it shows “Replica of LUN60 – virtualmachines”). Select Add Secondary Image or click the Add Secondary button, as shown in Figure 4.13.

7. In the corresponding dialog box select the Recovery Site CLARiiON (New Jersey) and expand the +SP A or +SP B to select the secondary image LUN created earlier, as shown in Figure 4.14.
You can reduce the time it takes to synchronize the data by changing the Synchronization Rate to High. This does increase the load on the array, so if you do change it, it's worth lowering the setting again once the first synchronization has completed. The Initial Sync Required option causes a full synchronization of data from one LUN to the other so that both LUNs have the same data state. In my case, because the LUN at the Protected Site is blank, it will save time if I don't enable this option. Skipping the initial sync can also be useful in a manual failback process if the Protected Site has only been down for a short while during a DR event; there would be little point in carrying out a full sync if the difference between the LUNs was relatively small.
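At the command line, the same step could be expressed roughly as follows; the Recovery Site SP address (172.168.4.79) is an assumption of mine, and the recovery policy and synchronization rate flags correspond to the settings discussed below the figures.

    # Run against the Protected Site SP; -arrayhost identifies the Recovery Site array holding the secondary image
    naviseccli -h 172.168.3.79 mirror -sync -addimage -name Replica_of_LUN60 -arrayhost 172.168.4.79 -lun 60 -recoverypolicy auto -syncrate high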


Figure 4.13 Once the MirrorView object is created a secondary image can be added that is the destination volume at the Recovery Site.


Figure 4.14 The destination LUN at the Recovery Site being used to receive updates from the LUN at the Protected Site

The recovery policy controls what happens if the secondary mirror image is inaccessible. Selecting Automatic forces a resync of the data without the administrator intervening, whereas selecting Manual would require the administrator to manually sync up the LUNs. The synchronization rate controls the speed of writes between the Protected Site CLARiiON (New York) and the Recovery Site CLARiiON (New Jersey). Most customers would choose automatic reconnection, but for some environments manual is preferred—for example, where network communication fails regularly or on a scheduled basis and the administrator wishes to reestablish communication between the arrays manually.

Creating a Snapshot for SRM Tests

When testing an SRM Recovery Plan, the replicated secondary mirror LUN—in my case, LUN_60_100GB_NYC_VIRTUALMACHINES_MIRRORVIEW—is not directly mounted and made accessible to the ESX hosts. Instead, a snapshot of the secondary LUN is taken, and this snapshot is presented to the recovery ESX hosts during the test. This allows tests of the Recovery Plan to occur during the day without interrupting normal operations or the synchronous replication between the primary and secondary LUNs. During a test event in SRM the secondary mirror remains read-only and continues to receive updates from the Protected Site array. A MirrorView target is only ever read-only; promoting it strips those flags off and, if communication with the source array still exists, forces read-only flags onto the source volume.

This snapshot is not created automatically. Instead, it's created manually, and when created it must be named in the correct way for the EMC CLARiiON Storage Replication Adapter (SRA) to locate it. The name of the snapshot must contain the text "VMWARE_SRM_SNAP"—the text string does not need to be at the start of the name; it can appear anywhere in the body of the snapshot name. This procedure is carried out at both the Protected Site (New York) and the Recovery Site CLARiiON array (New Jersey). This allows for full tests of the Recovery Plans, runs of the Recovery Plans, and both a test and a run of the failback process. For this reason, EMC recommends that you allocate a snapshot to both the primary and secondary image LUNs so that you can carry out failover and failback procedures with SRM, as well as ensuring that you have RLPs set up on both arrays.

To create a snapshot for SRM tests, follow these steps.

1. Within Unisphere on the Recovery Site array (New Jersey) select the secondary image LUN.

2. Right-click the LUN, and from the menu that opens, select SnapView and then Create Snapshot (see Figure 4.15).

3. In the dialog box that opens, enter the name of the snapshot; in my case, this is VMWARE_SRM_SNAP_LUN60 (see Figure 4.16).

It is possible to allocate the snapshot to a storage group, but in this case it is not necessary at this stage. As we discussed earlier, snapshots use an RLP, which you can think of as an allocation of storage purely for snapshots—they are not "free." EMC documentation indicates that you should size the RLP at 20% to 30% of the source LUN to hold the snapshot data. So, in my case, the 100GB volume would need an RLP of around 20–30GB. EMC also suggests that snapshots should only be used as a method for creating replicas of a production volume where the rate of data change within that volume does not exceed 20% to 30%. The entirety of the RLP is used for snap data.
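The SnapView CLI can create the same snapshot; in this sketch I assume the secondary image is also LUN 60 on the New Jersey array, and 172.168.4.79 remains my assumed SP address for that array.

    # Create the SRM test snapshot of the secondary image LUN; the name must contain VMWARE_SRM_SNAP
    naviseccli -h 172.168.4.79 snapview -createsnapshot 60 -snapshotname VMWARE_SRM_SNAP_LUN60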

Under the Replicas icon, in the Snapshot menu option, you should see a snapshot following the naming convention outlined a moment ago. Notice in Figure 4.17 how the snapshot is currently inactive because it is not in use. During a test of an SRM plan, you would expect this status to change to “Active.”


Figure 4.15 Each LUN must be allocated a snapshot. This snapshot is only engaged during a test of a Recovery Plan.


Figure 4.16 The snapshot name being set. The name must contain the text “VMWARE_SRM_SNAP” to work correctly.


Figure 4.17 The state of the snapshot is “inactive” and changes to an active state when a Recovery Plan is tested.

IMPORTANT NOTE: If you do engage in DR for real, or if you hard-test your plan, you likely will want to test the failback procedure before carrying out failback for real. For this test of failback to be successful you will need a similar snapshot ready at the Protected Site (New York). So repeat this process for the LUN at the Protected Site.

Creating Consistency Groups (Recommended)

Remember that, strictly speaking, the EMC CLARiiON SRA no longer requires consistency groups. However, you may find them useful, especially if you are replicating multiple MirrorView-enabled volumes. Testing has also shown that using consistency groups makes it likely you will avoid an admin-fractured state on some of your MirrorView volumes when failing back, which would otherwise force a full resync.

To create a consistency group, follow these steps.

1. On the protected CLARiiON array (New York) select the Replicas icon, and from the menu that opens, click the Mirrors option.

2. On the right-hand side of the Mirrors pane, click the Create Consistency Group link.

3. Change the Mirror Type to be Synchronous.

4. In the same dialog, enter a friendly name for the group (see Figure 4.18), and then add the remote mirrors from the Available Remote Mirrors list to the Selected Remote Mirrors list and click OK.

This will create a consistency group in the Mirrors view in Unisphere containing, in my case, just one LUN (see Figure 4.19). As my system grows and I create more MirrorView-protected LUNs I could add them to the same consistency group or create different consistency groups for different types of applications. As you will see later, consistency groups map almost directly to the Protection Group object in VMware SRM. After clicking OK, the consistency group will also be created at the Recovery Site CLARiiON (New Jersey).
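The equivalent MirrorView CLI calls would look something like the sketch below; the group name is my own invention, and the SP address again comes from my lab.

    # Create a synchronous consistency group on the Protected Site array
    naviseccli -h 172.168.3.79 mirror -sync -creategroup -name New_York_Group -o
    # Add the existing remote mirror to the group
    naviseccli -h 172.168.3.79 mirror -sync -addtogroup -name New_York_Group -mirrorname Replica_of_LUN60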


Figure 4.18 Consistency groups can almost mirror SRM Protection Groups, gathering LUNs to ensure predictable replication across multiple LUNs.


Figure 4.19 Consistency groups enable you to track the LUNs configured for MirrorView and monitor their state.

Granting ESX Host Access to CLARiiON LUNs

Now that we have created our primary, secondary, and snapshot objects, we can make them available to the ESX hosts. This should be a simple procedure of locating the storage groups that contain the ESX hosts and then allocating the correct volume to them.

At the Recovery Site CLARiiON (New Jersey)

At the Recovery Site, the ESX hosts will need to be granted access to both the MirrorView secondary image LUN and the snapshot created earlier for both tests and runs of the SRM Recovery Plans to work correctly.

To grant this access, follow these steps.

1. In Unisphere, select the Storage icon, and in the menu that opens, click Storage Groups and then right-click the storage group that contains your ESX hosts; in my case, this is called “New_Jersey_Cluster1.”

2. Choose Select LUNs in the menu or click the Connect LUNs button (see Figure 4.20).

3. Expand +Snapshots and select the snapshot created earlier; in my case, I called this “VMWARE_SRM_SNAP_LUN60.”

4. Expand the +SP A or +SP B and locate the secondary image LUN created earlier.

5. Select the LUN in the list; in my case, I called this “LUN_60_100GB_NYC_ VIRTUALMACHINES_MIRRORVIEW.”

6. Scroll down the Selected LUNs list, and under Host ID allocate the LUN number that the ESX hosts will use. In my case, as host ID 60 was available, I used it (see Figure 4.21).


Figure 4.20 At the Recovery Site, the ESX host must be granted rights to the LUNs and snapshot created during the MirrorView configuration.


Figure 4.21 Although the LUN ID is 60, the host ID can be any value between 0 and 255.

After clicking OK and confirming the usual Unisphere dialog boxes, you should see the LUN appear in the LUNs list in the storage group (see Figure 4.22). Notice how the description indicates this LUN is merely a secondary copy. The snapshot will only become “active” when you test your Recovery Plans in SRM.
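For completeness, the storage group changes above can also be made with naviseccli, along these lines; the -addsnapshot syntax in particular varies between FLARE releases, so treat this as a sketch to verify rather than a recipe.

    # Present the secondary image LUN to the ESX hosts with host ID 60
    naviseccli -h 172.168.4.79 storagegroup -addhlu -gname New_Jersey_Cluster1 -hlu 60 -alu 60
    # Add the SRM test snapshot to the same storage group
    naviseccli -h 172.168.4.79 storagegroup -addsnapshot -gname New_Jersey_Cluster1 -snapshotname VMWARE_SRM_SNAP_LUN60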

At the Protected Site CLARiiON (New York)

Allocating the LUN and snapshot at the Protected Site is essentially the same process as for the Recovery Site. However, the status labels are different because this LUN is read-writable and is being mirrored to the Recovery Site location.

To allocate the LUN and snapshot follow these steps.

1. In Unisphere, select the Storage icon. In the menu that opens, click Storage Groups, and then right-click the storage group that contains your ESX hosts; in my case, this is called "New_York_Cluster1."

2. Choose Select LUNs in the menu or click the Connect LUNs button.


Figure 4.22 Under Additional Information, we can see the LUN is a secondary copy, and the snapshot is inactive.

3. Expand +Snapshots and select the snapshot created earlier. In my case, I called this “VMWARE_SRM_SNAP_LUN60.”

4. Expand the +SP A or +SP B and locate the original (primary image) LUN created earlier.

5. Select the LUN in the list. In my case, I called this "LUN_60_100GB_VIRTUALMACHINES."

6. Scroll down the Selected LUNs list, and under Host ID allocate the LUN number that the ESX hosts will use. In my case, as host ID 60 was available, I used it.

After clicking OK and confirming the usual Unisphere dialog boxes, you should see the LUN appear in the LUNs list in the storage group. Notice how the description indicates that this LUN is marked as being “mirrored” (see Figure 4.23).

You should now be able to rescan the ESX hosts in the Protected Site and format this LUN. We can request a rescan of all the affected ESX hosts in the VMware HA/DRS cluster by a single right-click. After the rescan, format the LUN with VMFS and create some virtual machines.
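From the classic ESX service console, the rescan and format might look like the following sketch; the HBA name, NAA device identifier, and datastore label are placeholders of mine, and in practice the Add Storage wizard in vCenter creates the partition for you.

    # Rescan a specific HBA for newly presented LUNs
    esxcfg-rescan vmhba2
    # Format the first partition of the new device with VMFS3 and give it a label
    vmkfstools -C vmfs3 -S NYC_VirtualMachines /vmfs/devices/disks/naa.60060160c8a01234:1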


Figure 4.23 At the Protected Site, we see the LUN is mirrored to the Recovery Site.

Using the EMC Virtual Storage Integrator Plug-in (VSI)

Alongside many other storage vendors, EMC has created its own storage management plug-ins for vCenter. You might prefer to use these on a daily basis for provisioning LUNs as they are quite easy to use. In the context of SRM, they may speed up the process of initially allocating storage to your ESX hosts in the Protected Site. Once the hosts are provisioned, it will merely be a case of setting up the appropriate MirrorView relationship and snapshot configuration. Who knows? Perhaps these plug-ins will one day be extended to allow configuration of SRM's storage requirements directly from within vSphere. In addition to provisioning new storage, EMC VSI also has enhanced storage views and the ability to create virtual desktops using array-accelerated cloning technologies.

The following components should be installed on your management PC before embarking on an installation of the EMC VSI:

• Unisphere Service Manager

• EMC Solutions Enabler

• RTOOLS software (if you are using EMC’s PowerPath technologies)

• The NaviSphere CLI (if you are dealing with a legacy array like my EMC NS-20; naviseccli is required for all CLARiiONs and will be used by the VNX family as well)

After you install these components and the EMC VSI, when you load the vSphere client you should see an EMC VSI icon in the Solutions and Applications section of the "home" location (see Figure 4.24).


Figure 4.24 Installing the VSI adds an icon to the Solutions and Applications view in vCenter.

This icon will enable you to configure the plug-in so that it becomes aware of your CLARiiON and Celerra systems. In terms of the CLARiiON, it is merely a case of inputting the IP addresses of the storage processors (SP A and SP B) on the array, together with a username of "nasadmin" and the password that was used when the array was set up (see Figure 4.25); you can set up a similar configuration for any Celerra systems you maintain.

Once correctly configured, the EMC VSI adds a Provision Storage option to the right-click of the VMware cluster and will take you through the process of both creating a LUN on the array and formatting the LUN for VMware’s file system VMFS (see Figure 4.26).

If you want to learn more about the EMC VSI, I wrote about its functionality on my blog, RTFM Education:

www.rtfm-ed.co.uk/2011/03/01/using-the-emc-vsi-plug-in/


Figure 4.25 Entering the IP addresses of the storage processors, along with a username and password


Figure 4.26 VSI adds right-click context-sensitive menus to various parts of vCenter.

Summary

In this chapter I briefly showed you how to set up EMC CLARiiON MirrorView, which is suitable for use with VMware SRM. As I'm sure you have seen, it takes some time to create this configuration. It's perhaps salutary to remember that many of the steps you have seen only occur the first time you configure the system after an initial installation. Once your targets are created, your file systems and LUNs are created, and your replication relationships are in place, then you can spend more of your time consuming the storage.

From this point onward, I recommend that you create virtual machines on the VMFS volume so that you have some test VMs to use with VMware SRM. SRM is designed to pick up only on LUNs/volumes that are accessible to the ESX host and contain virtual machine files. In previous releases, if a volume was blank it simply wasn't displayed in the SRM Array Manager Configuration Wizard; the new release instead warns you with an error if you fail to populate the datastore with virtual machines. This was apparently a common problem people encountered with SRM 4.0, but one that I rarely saw—mainly because I always ensured my replicated volumes had virtual machines on them. I don't see any point in replicating empty volumes! In my demonstrations I mainly used virtual disks, but I will cover RDMs later in this book because they are an extremely popular VMware feature.

 
