Installing and Initial Configuration of the Celerra Virtual Appliance

Glad to see that people are having success downloading and running the Celerra VM on VMware Workstation based on this original post.  I wanted to provide a quick “HOWTO” to help, and will publish a 201 (setting up replication) and a 301 (configuring SRM).

Most folks are doing well if they just RTFM, but I thought a little “hand-crafted walkthrough” might help others.   The standard documentation is geared towards VMware Workstation, which is the officially supported target (BTW, you don’t need to ask – we’re making an ESX OVF a standard package :-), so some people stumbled a bit when trying to use it with ESX.

I’ve updated the OVF (some of the problems people were having were the result of the previous OVF I posted – SORRY – said sheepishly).  Make sure you download the new one if you’re going to follow along (again, you can get it here).

Ok – have you got it?   Ok… then read on…

Quickly then – here’s what this HOWTO covers:

  1. Importing the Virtual Appliance
  2. Getting the Celerra Sim up on ESX and configuring the VMX
  3. Configuring network interfaces (and what they all mean)
  4. Making the Celerra Sim unique after cloning
  5. Licensing Celerra features
  6. Configuring Datamover interfaces and iSCSI targets
  7. Getting iSCSI LUNs to the ESX cluster
  8. Getting NFS exports to the ESX cluster

I will post followups shortly:

  • 201 series (adding storage to the simulator, configuring snapshots, configuring remote replication, etc)
  • 301 series (configuring Site Recovery Manager, VDI mass replicas, etc).

In a little bit, I’m going to start posting other EMC VMs bit by bit (they are legion – ADM, Control Center, Avamar, Replication Manager, NetWorker 7.4.x, IT Compliance Analyzer, EMC Backup Advisor, and that’s just getting started) with similar guides, so let me know if this is useful.  BTW – they are also going through the formal release and VM appliance programs, but I’m notoriously impatient, and I think many of you out there are smart enough to get some fun out of these without too much hand-holding 🙂

Read on……

Step 1: Download and import.  You can get the Celerra Simulator here (or, if you are an EMC employee, partner, or customer, you can get it from Powerlink: http://powerlink.emc.com – search for “Celerra Simulator”).   If you start with the official VMware Workstation build, you can use VMware Converter to import it, but make sure you revert the VM format to VMware Workstation 4 format before you do.    Otherwise, just start from my OVF.  Once you have the OVF, the first thing is to import the Celerra Simulator.
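
If you’d rather skip the GUI import and you have VMware’s OVF Tool handy, a command-line deploy along these lines should also work (this is just a sketch – the file name, host name, and datastore below are placeholders for your environment):

    # Deploy the OVF straight to an ESX host with VMware OVF Tool
    # (celerra-sim.ovf, esx01.lab.local and datastore1 are placeholders)
    ovftool --datastore=datastore1 --name="Celerra-Sim" \
        celerra-sim.ovf vi://root@esx01.lab.local/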

This video shows how to do Step 1:

You can download a high-rez version of this video here.

 

Step 2: Configure the VM.  Note that the VM needs a fair amount of RAM – 2GB if it’s standalone, and 3GB if it’s going to be replicating.   FYI – the SMALLEST real Celerra ships with 8GB across all its components.  This is classic EMC – at least from our hardware divisions  🙂  Though I shouldn’t say that.  Technically the smallest EMC hardware storage platform that WORKS (not on the HCL – yet) is the Lifeline, which costs about $500, though even smaller, cheaper stuff from Iomega counts too.
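
For reference, the memory piece is a one-liner in the VM’s .vmx file (or just set it in the VI Client’s “Edit Settings” dialog):

    # In the Celerra Sim's .vmx file
    memsize = "2048"     # standalone
    # memsize = "3072"   # if the sim will be replicating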

This video shows how to do Step 2:

You can download a high-rez version of this video here.

NOTE: if the VM fails to start the blackbird service, or hangs at that point during the boot sequence: 1) just reboot the VM; 2) if that doesn’t do it, disconnect the VM from the external network, then reconnect it after configuring the interfaces for your environment.  Sometimes this happens due to IP address conflicts on your network.

Step 3: Configure IPs. Next, you have to configure the Ethernet interfaces.  This is a one-time task, but one where a lot of people get stumped.

The key is to understand the real architecture of a Celerra.    A real Celerra has 3 major architectural components: Control Station(s), Datamovers (filer heads), and block back-end storage which is used by the Datamovers.  It’s architected this way for a couple of reasons: 1) having a Control Station (rather than managing the Datamovers directly) means that scaling by simply adding Datamovers (up to 8 are supported in the larger Celerra configurations) is easy, and management stays simple.   This design also keeps the control path separate from the data path, and means that the complex element (the Datamover) never needs “GUI design work”; 2) the block storage being separate means that we can use CLARiiON HW to handle things like RAID, leaving the Datamover (filer head) to do file serving.   There are technical arguments either way on this architectural design.   For comparison, the Celerra is most analogous to NetApp FAS devices, and in their case, the block storage is completely handled by the filer heads (including RAID), and you manage each of the two heads independently (unless you use their enterprise management tools).

Here’s a picture of the front of a Celerra (specifically the NS22FC):

image

And how it looks when it arrives:

image

And here is a physical picture of the back, which is the most important for this dialog.   The reasons for this design are essentially two-fold: 1) the control station being “out of band” makes security a bit stronger; 2) it means that scaling is easier (since Celerras scale up to 8 Datamovers, the control station gives you a single management GUI, rather than 8).

image

Now, here are the key interfaces, and how they map to the VM.  The key thing to understand is that in the Celerra Simulator (not the real Celerra), the VM itself is the Control Station, and the Datamovers are a service running in the Control Station.   OK, the diagram below shows how **I** have mine configured, but NOTE that by default, cge0 and cge1 BOTH bind to eth0.

image

So – VM to vSwitch (aka guest to physical world) mapping: vNIC 1 = eth0, vNIC 2 = eth1, and vNIC 3 = eth2 in the VM.

And guest to the Celerra: eth0 = cge0, eth1 = cge1 (again, this is in **my** config – by default, cge1 also binds to eth0), eth2 = Control Station management GUI.

So – this is what you have logically represented:

image

Phew! (not hard, but I’m a big believer in core understanding as the basis of learning, rather than rote learning – that way you can extrapolate)
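
For the curious, here’s roughly what the management interface setup boils down to under the covers – this is just a sketch assuming the Control Station’s standard RHEL-style network scripts, and the addresses are examples (the video shows the actual steps):

    # /etc/sysconfig/network-scripts/ifcfg-eth2 on the Control Station
    # (eth2 is the management interface in MY mapping above - example addresses)
    DEVICE=eth2
    BOOTPROTO=static
    IPADDR=192.168.1.100
    NETMASK=255.255.255.0
    ONBOOT=yes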

This video shows how to do Step 3:

You can download a high-rez version of this video here.

Step 4: making your Celerra VM unique. Next, we’ve got to do some weird steps, so follow carefully.   These are always a good idea – but mandatory when using SRM (and replicating Celerra VMs).   I’ll reiterate – you don’t need to do this if you’re just going to poke around or use the Celerra for simple shared storage.   So why are we doing this?   Well, in later HOWTO videos, we will configure replication and SRM.  Since the VMs have been cloned, they have to be made unique.   There are three things that matter here: 1) the serial number of the array; 2) the control station name; 3) the MAC addresses of the cge interfaces.   The “serial number” of the Celerra is generated from the VM UUID – and you just need to poke it to generate a new serial number.

Here’s how you do it.  There is a script in the /opt/blackbird/tools folder called init_storageID that updates the serial number of the Celerra Simulator. After you clone the VM and have the UUID updated, just log into the Simulator as root and run this script. It will generate a new serial number based on the UUID and reboot the system. When it comes back up, you should be able to replicate between the clone and the original.
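
In other words, on the clone it boils down to this:

    # As root on the cloned Control Station:
    cd /opt/blackbird/tools
    ./init_storageID    # regenerates the serial number from the VM UUID, then reboots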

Then, you need to make sure the Control Station has a nice hostname, not the generic “localhost”.   Lastly, you need to remove and re-add the cge interfaces (which also gives you an opportunity to map them to whatever eth interfaces – and therefore vNIC and vSwitch – you want).
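
The hostname piece is plain Linux – something like the following (the name is just an example; the cge remove/re-add is shown in the video):

    # Give the Control Station a unique name (example hostname)
    hostname celerra-sim-01
    # ...and make it persist across reboots by setting
    # HOSTNAME=celerra-sim-01 in /etc/sysconfig/network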

This 2-part video (argh – the Youtube 10 minute limit) shows how to do Step 4:

You can download a high-rez version of this video here.

Step 5: Licensing.  Ok – weird steps done (we should work on making that simpler, dontcha think? :-), let’s open the Celerra Manager GUI, and enable licenses.
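
If you prefer the CLI, I believe the same licenses can be enabled from the Control Station with nas_license – treat this as a sketch and double-check the feature names for your DART release:

    # From the Control Station CLI (feature names can vary by release)
    nas_license -list             # see what's currently enabled
    nas_license -create iscsi     # enable iSCSI
    nas_license -create snapsure  # enable snapshots (SnapSure)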

This video shows how to do Step 5:

 

You can download a high-rez version of this video here.

Step 6: Configuring cge IPs and creating iSCSI targets.  Now we’re cooking.   Quickly and easily assign IPs to the cge interfaces and create the iSCSI targets.
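
The videos use the Celerra Manager GUI, but for reference, assigning an IP to a cge interface from the Control Station CLI looks roughly like this (the addresses are examples):

    # Assign an IP to cge0 on the Datamover (server_2) - example addresses
    server_ifconfig server_2 -create -Device cge0 -name cge0 \
        -protocol IP 192.168.1.51 255.255.255.0 192.168.1.255
    server_ifconfig server_2 -all    # verify
    # iSCSI targets can also be created from the CLI with server_iscsi,
    # but the Celerra Manager wizard is the easier route.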

This video shows how to do Step 6:

You can download a high-rez version of this video here.

Step 7: Configuring iSCSI LUNs and presenting them to ESX as VMFS datastores.  Now we’re cooking.   Simple – use the Celerra Manager wizard to create iSCSI LUNs, then configure the iSCSI initiator in ESX and get those LUNs loaded up with VMFS and ready to rumble!
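
On the ESX side, the service console equivalent of the GUI steps is roughly this (the target IP is an example, and your software iSCSI vmhba number may differ):

    # On each ESX 3.x host (service console)
    esxcfg-firewall -e swISCSIClient            # open the firewall for software iSCSI
    esxcfg-swiscsi -e                           # enable the software iSCSI initiator
    vmkiscsi-tool -D -a 192.168.1.51 vmhba32    # add the Celerra as a Send Targets address
    esxcfg-rescan vmhba32                       # rescan to pick up the new LUNs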

This video shows how to do Step 7:

You can download a high-rez version of this video here.

Step 8: Configuring NFS exports and presenting them to ESX as NFS datastores.  Almost there.   Simply use the Celerra Manager to create a filesystem, export it via NFS, and discover the power and simplicity of NFS datastores in ESX.
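
Again for reference, the CLI equivalent looks roughly like this – the export path, network, and datastore label are just examples:

    # On the Control Station: export an existing filesystem over NFS (example values)
    server_export server_2 -Protocol nfs \
        -option rw=192.168.1.0/255.255.255.0,root=192.168.1.0/255.255.255.0 /nfs_fs1

    # On each ESX host: mount it as an NFS datastore (example IP, path, and label)
    esxcfg-nas -a -o 192.168.1.51 -s /nfs_fs1 celerra_nfs
    esxcfg-nas -l    # confirm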

This video shows how to do Step 8:

You can download a high-rez version of this video here.

OK – now what?

Well, you have a working shared storage config, without shared storage – so the world of advanced VMware features is at your fingertips.   You also have a fully functional (BUT NOT FOR PRODUCTION USE!!!) Celerra, one of the leading advanced unified storage platforms on the market – it is VERY powerful and has more features than you could imagine.   You can use the public support forums here: http://forums.emc.com.  You’ll need a Powerlink login, but you can register.   You can literally do anything you can with a real Celerra.  I will be posting more HOWTOs as well as linking to others.   Post comments/questions, and let me know the neat things you do with the sim!!