HP BladeSystem Installation Project (Part 1)


15 Jan 2015


It was two days before Christmas Eve, 2014. I was at home, already slowing down for the Holidays, when I got a call from a courier. Roughly 15 minutes later I was unboxing a big, heavy Christmas present and…OH YEAH! Christmas had come early this year: Santa (and maybe HP Finland too) had listened to our humble wishes and sent us a bunch of super duper cool stuff like:

  • 1 x HP BladeSystem c7000 blade enclosure (with 10 fans and 6 power supplies)
  • 8 x HP BL460c Gen9 blade servers
  • 2 x HP Virtual Connect FlexFabric modules

…all to be used for demo purposes in the PROOFMARK portal!

We already have a few blade enclosures in our demo center, covering all previous blade generations (from G1 onwards), but now we are talking about the meanest and baddest, state-of-the-art Generation 9s.

Installing and configuring all that is pretty straightforward. According to the manual, at least; practice might be a bit of a different story. So, I decided to document the whole installation process as a 2-part blog post series. And, whether you want it or not, I will also share some thoughts about the blade server concept in general, spiced with my personal opinions.

So, without any further ado, let’s begin!

Unboxing

HP BladeSystem - 01 - In the package

First, I must praise the packaging for some small but extremely practical solutions that make boxing/unboxing simple. For example, the whole top of the box can simply be slid upwards without cutting or removing anything. Also, the front and rear parts of the bottom of the box can be tilted downwards, so you can take components out from the bottom of the enclosure without lifting the whole thing up.

Bravo! …to whoever designs these boxes.

HP BladeSystem - 02 - In plastic wrapping

On the other hand, all the styrofoam padding has been extremely…hmm…imaginatively designed, to say the least. I have absolutely no explanation for the numerous edges, corners and pointy ends, but I’m sure they have an important purpose. Maybe it’s for aerodynamics (just in case)?

Anyway, I have about one cubic meter of package to unbox and carry to the data center. It is already the Holiday season, I’m alone at the office, and our enclosure weighs about 150 kilos (unboxed), so no matter how highly I think of my biceps there’s no way I can carry the whole enclosure by myself. So, the first thing to do is to get as much of the boxing out of the way as possible and dismantle the enclosure into as small pieces as possible.

HP BladeSystem - 05 - Front emptied

Thankfully, all the blade servers and power supplies in the front, and all the interconnect modules (Virtual Connects) and fans in the rear, come off easily. Cannibalized like that, the enclosure itself weighs less than 90 kilos, and it can be further split in half (about 50 and 40 kilos a piece) to make the carrying even easier if needed. I got help carrying the enclosure to the data center, so no need for that this time. By the way, a fully populated c7000 enclosure can weigh more than 200 kilos unboxed. So you’ll probably need both your hands lifting it into a rack.

HP BladeSystem - 10 - Enclosure in the rack empty rear view

That’s the rear view of the blade enclosure, already installed in the rack. From top down we have…
– 5 empty fan slots
– 2 empty interconnect module bays. Interconnect modules are meant for all kinds of switches and other “data transfer modules”. We’ll talk about these later.
– 6 more empty interconnect module (IC) bays, but these ones have the dust covers (blanks) installed. That makes a total of 8 IC module bays in one c7000 enclosure.
– An empty Onboard Administrator tray bay. That’s the brains & logic of the blade enclosure. Seems pretty “lobotomized” at the moment, eh? We’ll change that shortly.
– 5 more empty fan slots, making it a maximum of 10 fans in a c7000.
– Finally, 6 single-phase power cable connectors to be connected to power outlets.

Oh, by the way, installing the enclosure rack rails using the included rack mount kit is a walk in the park. You don’t even need any tools or screws. Just extend the two-part rails to the correct length and they snap right into place in any standard rack. Even my mom could do it. Well, not really, but you get the picture.

HP BladeSystem - 07 - Components on a table

And on the table from left to right:
– 6 power supplies
– 8 brand new Generation 9 blade servers with just a tad over 1TB of RAM! We will spend a lot more time talking about these bad boys later!
– 10 fans
– 2 Virtual Connect Modules and an Onboard Administrator tray (with one OA module).
– 6 power cables
– Some random accessories

There we go: all components on the table and the empty chassis (yes, that’s another name for enclosure) installed in the rack, waiting to be repopulated with all the stolen fans, power supplies, servers, interconnect modules and Onboard Administrator modules. I’ve rolled up my sleeves already, so let’s go!

Fans

Let’s start with the fans. These 10 Active Cool Fans (as HP calls them) cool down almost the whole enclosure centrally: all components, including the servers, interconnect modules, Onboard Administrator modules, internal circuit boards and so on. The only modules in the enclosure that have their own fans are the power supplies. This is the beauty of the blade server concept in general: we have a chassis which is much like a data center; it has four walls, a roof and a floor. Then we install some fans into the chassis to keep it cool (like the big cooling units in a data center) and finally we add power supplies to provide power to the whole enclosure. After that we can start carrying all the geeky stuff in!

I remember one marketing slogan that HP used back in the day describing the blade concept: “HP BladeSystem – Data Center in a box” (or something like that). I like that a lot. Because that’s what it basically is!

HP BladeSystem - 11 - Fan front

The design of the Active Cool Fans is said to be inspired by jet engines, and they are covered by some 20 patents. All I know is that they are some pretty damn powerful air movers! Anyone who has tried stressing a fully populated c7000 to the limit knows what I’m talking about.

HP BladeSystem - 12 - Fan rear

Rear view. So, jet engine inspired, huh? Yep, and some even say that if you look very carefully, you can see a faint Rolls-Royce logo printed inside those blowers. Urban legend? Beats me. I’ve never seen one, but that doesn’t prove anything.

HP BladeSystem - 13 - 4 fans

(My apologies for the slightly blurry photo here)

You can actually get your chassis with fewer than 10 fans to save some shillings, but if you do, you need to remember a few population rules. The minimum number of fans is 4, or the Onboard Administrator won’t start. AND with 4 fans you can only use 2 out of the 16 blade bays. So, you have paid for the mighty 16-slot blade chassis but decided to use only 2 blades? OK…why? I really can’t think of any good reason to go for 4 fans. Nevertheless, if you do, you must populate the fans in bays 4, 5, 9 and 10, i.e. the rightmost bays. That’s because you’d start populating the servers in the front from left to right.

HP BladeSystem - 14 - 6 fansHP BladeSystem - 15 - 8 fansHP BladeSystem - 16 - 10 fans

Here you can see the rest of the recommended best-practice fan configurations: 6, 8 and 10 fans. With 6 fans you can have one half (to be specific: the left half) of the blades running, and with 8 or 10 fans all 16 blades can run simultaneously (the way it’s meant to be).

Using 10 fans also gives you one extra edge in the form of redundancy: you can lose 2 fans and still have the whole enclosure up and running.
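To make those rules easier to remember, here’s a minimal Python sketch of the fan population rules as I’ve described them. The bay lists for the 6- and 8-fan layouts are from my memory of HP’s setup guide, so double-check them before relying on this.

```python
# A minimal sketch of the c7000 fan population rules described above.
# The 6- and 8-fan bay lists are from memory; verify against HP's setup guide.

FAN_RULES = {
    4:  {"fan_bays": {4, 5, 9, 10},             "blades": "2 blades max"},
    6:  {"fan_bays": {3, 4, 5, 8, 9, 10},       "blades": "left half (8 blades)"},
    8:  {"fan_bays": {1, 2, 4, 5, 6, 7, 9, 10}, "blades": "all 16 blades"},
    10: {"fan_bays": set(range(1, 11)),         "blades": "all 16 blades, fan redundancy"},
}

def check_fans(installed_bays):
    """Validate a set of populated fan bays against the supported layouts."""
    rule = FAN_RULES.get(len(set(installed_bays)))
    if rule is None or set(installed_bays) != rule["fan_bays"]:
        raise ValueError(f"unsupported fan layout: {sorted(installed_bays)}")
    return rule["blades"]

print(check_fans({4, 5, 9, 10}))   # -> '2 blades max'
print(check_fans(range(1, 11)))    # -> 'all 16 blades, fan redundancy'
```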

Our chassis came with all 10 fans, so installing them in the correct bays is pretty straightforward.

OK, the fans are in. Next up, the brains of the chassis: the Onboard Administrator (or “OA” among friends) modules.

Onboard Administrator

As mentioned before, the Onboard Administrator (OA) is the management module of the whole chassis. You can use the OA to set the IP addresses of all the components in the chassis, define power modes, boot-up sequences, e-mail notification settings and a ton of other things. You can access the OA either thru a GUI (web browser), the built-in LCD display (called Insight Display) or a Command Line Interface.
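And since there’s a CLI, you can script against it over SSH too. Below is a rough sketch using Python and the paramiko library; the host name and credentials are made up for illustration, and the SHOW commands are from my memory of the OA CLI, so verify them against the OA CLI user guide.

```python
# Rough sketch: querying the OA Command Line Interface over SSH.
# Host name and credentials are hypothetical; the SHOW commands are from
# memory and should be checked against the OA CLI user guide.
import paramiko

OA_HOST = "oa-demo.example.local"  # hypothetical OA address

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(OA_HOST, username="Administrator", password="secret")

for command in ("SHOW ENCLOSURE INFO", "SHOW FAN ALL", "SHOW SERVER LIST"):
    _, stdout, _ = client.exec_command(command)
    print(f"--- {command} ---")
    print(stdout.read().decode())
    # If the OA only offers an interactive shell, use client.invoke_shell()
    # and write the commands to the channel instead of exec_command().

client.close()
```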

HP BladeSystem - 17 - OA tray module and blank

The OA hardware entity consists of a couple of different components: the OA tray (in the back), the OA module itself (front left) and a dust cover (in case you only have one OA module). In most production configurations you’d always have two OA modules for redundancy (side note: I wish I had a redundant pair of brains on certain mornings), but since our chassis is purely for educational and demonstration purposes, we can manage with one OA module and even tolerate the loss of it.

Actually, the chassis can run without the OA modules completely. It can’t boot without them, but if all the OA modules fail while the enclosure is up and running, all the fancy optimization logic is gone, removed, head shot, and the enclosure falls into survival mode: it makes all the fans blow at warp speed, doesn’t enforce any power limitations and, most importantly, makes all the LEDs go into David Guetta mode. It’s fun to watch. Then, when you reinstall the OA modules, everything immediately goes back to (boring) normal.

HP BladeSystem - 18 - OA Enclosure link ports

That’s a close-up of the OA tray. In the middle there’s a couple of standard RJ-45 ports. They are called Enclosure Interlinks and they are used to…well, link enclosures together. This way, when you connect to one of the OA modules you can manage all linked enclosures. Handy! The maximum number of enclosures you can link together is, unfortunately, only 4.

HP BladeSystem - 19 - OA module ports

The OA module itself. Ports from left to right:
– iLO port. Used to connect to the OA itself plus all the blade servers’ Integrated Lights-Out management chips. So, instead of 16 separate network cables (as would be the case with 16 rack-mounted servers), you need only one.
– USB port for updating the enclosure firmware, uploading/downloading configurations and mounting ISO images as optical drives to the blade servers.
– Serial port. Nuff said? =D Well, it’s not much used anymore, mostly because I’d first have to go to some used computer store to buy a computer that has a serial port, and then to another used computer store to buy a serial cable.
– VGA port for KVM (Keyboard, Video, Mouse) capabilities, since the blades themselves don’t have those ports. Well, actually they kinda do, through a special adapter, but that’s cheating. Much like my MacBook Air’s Thunderbolt port. “Sure, you have all the ports in the world available”, said the Apple Genius. “Just 49,95€ per port”, the Genius continued.

HP BladeSystem - 20 - OA tray out

The OA tray is located just beneath the interconnect modules and takes up the whole width of the enclosure. You first need to install the tray, and only after it is securely in place can you install the OA modules. The same goes the other way around: you cannot remove the OA tray without first removing both of the OA modules.

See those purple handles? You first push the module from the BODY all the way deep into the enclosure and THEN use the handle just to lock the module in place. NOT to push the module in. Approximately 330 service requests saved there. You can thank me later, HP.

HP BladeSystem - 21 - OA tray in place, OA module out

OA tray in place, next up the OA module itself. We know the drill already.

HP BladeSystem - 22 - OA installed

There you go. OA tray, OA module and the dust cover all installed and ready for action. Next, Virtual Connects.

Virtual Connect modules

HP Virtual Connect for Dummies

Well, well, well…where to begin. Virtual Connect is one of my favourite blade topics to talk about, but it’s also so DAMN hard to explain simply and quickly. At the same time, it’s definitely one of the coolest things data center computing has seen in the past 10 years.

I’m not gonna start lecturing you about Virtual Connect (now), so if you are not very familiar with it, I can warmly recommend an exceptionally well-written introduction book called HP Virtual Connect for Dummies. It explains all the basic concepts of server-edge virtualization, the purpose and advantages of Virtual Connect, the different VC modules etc. in a very enjoyable fashion, and the best part is, it’s only some 60 pages! So, you can easily read it during a summer holiday. What? That’s reasonably fast for me.

HP BladeSystem - 23 - VC-FF front without SFPs

This is how a 24-port HP Virtual Connect FlexFabric module looks from the uplink side. We have a total of 8 uplink ports on the faceplate; the first 4 can be set to function either in FC or in Ethernet mode, while the last 4 are fixed to Ethernet. So, it is pretty much as close to convergence as we can currently get with the existing standards. And DON’T get me started on FCoE/CEE/DCB n’ stuff. We’re not there yet. Soon, but not yet.
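Just to make that port rule concrete, here’s a tiny Python sketch of it. The X1–X8 names match the faceplate labels; everything else is my own illustration, not any kind of VC API.

```python
# Toy validator for the FlexFabric faceplate rule described above:
# uplinks X1-X4 can be FC or Ethernet, X5-X8 are Ethernet only.

SELECTABLE = {"X1", "X2", "X3", "X4"}
ETHERNET_ONLY = {"X5", "X6", "X7", "X8"}

def validate_uplinks(assignment):
    """assignment maps a port name to 'FC' or 'Ethernet'."""
    for port, mode in assignment.items():
        if port not in SELECTABLE | ETHERNET_ONLY:
            raise ValueError(f"unknown uplink port: {port}")
        if port in ETHERNET_ONLY and mode != "Ethernet":
            raise ValueError(f"{port} supports only Ethernet, not {mode}")
    return True

# Our demo layout: X1-X4 as FC towards the storage, X5-X8 as Ethernet.
layout = {"X1": "FC", "X2": "FC", "X3": "FC", "X4": "FC",
          "X5": "Ethernet", "X6": "Ethernet", "X7": "Ethernet", "X8": "Ethernet"}
print(validate_uplinks(layout))  # -> True
```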

And that weird-looking white piece of paper on top of the module is just a sticker with the default Administrator passwords, MAC addresses etc. You should stick it somewhere safe with all the other important papers you have. Just in case.

HP BladeSystem - 24 - Enet and FC SFPs

A couple of so-called transceivers, or SFPs, that we need to plug into the empty port slots of the VC-FF module. These ones happen to be 1Gbit and 8Gbit versions. You can also use standard 1Gbit RJ-45 modules if you feel like it, no problem.

HP BladeSystem - 25 - VC-FF front with SFPs

A couple of SFPs installed in place. We are going to use the 4 leftmost ones for FC connectivity to our brand new 3PAR 7200c storage arrays, and the 4 rightmost ports are dedicated to Enet.

HP BladeSystem - 26 - VC-FF lid open

And this is an internal view of a Virtual Connect FlexFabric module, for all of you who are interested in this kind of stuff. Not much to say here except: “Boy, that’s a lot of fancy stuff in a small space!”

HP BladeSystem - 27 - VC-FF rear

A rear view of a Virtual Connect module. This is how all IC modules look from the rear; no matter if we are talking about Virtual Connects, SAS switches or simple pass-thru modules, they all connect internally to the signal midplane of the chassis thru this 180-pin port that handles all the traffic from/to all 16 servers in the enclosure.

HP BladeSystem - 28 - VC module in bay 1 installed

Those first two adjacent interconnect bays are reserved for our Virtual Connect FlexFabric modules. Whatever modules are installed in IC bays 1 and 2 always connect to the default ports on all 16 blades in the chassis. So, make sure your interconnect modules match the blade ports; FC modules don’t communicate very well (read: at all) with Ethernet ports, so be careful.

Installing an interconnect module is pretty much the same as installing an OA module: first push the module far in from the body, then lock it in place by pushing the purple handle in.

HP BladeSystem - 29 - Both VC modules with SFPs installed

There, both VC-FF modules installed with all the SFPs we’re going to need. The rest of the interconnect bays are reserved for expansion. To use those 6 bays you need to have an expansion card, called a mezzanine card, installed in the blades. Expansion cards can be, for example, 2-port FC cards, 4-port Enet cards or something else. Then, depending on which slot the mezzanine is installed in on the blades, you need a matching IC module in the back.

You can refer to one of the several port mapping documents on the web to learn more about the c7000. Here is at least one quick and simple explanation of c7000 port mapping.
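As a memory aid, here’s how I’d jot down the commonly cited mapping for half-height blades in Python. These bay assignments are from my memory and should be verified against the official port mapping documents mentioned above.

```python
# Commonly cited c7000 port mapping for half-height blades, from memory;
# verify against HP's official port mapping documentation before trusting it.
HALF_HEIGHT_MAP = {
    "LOM":    [1, 2],        # built-in adapter -> IC bays 1 and 2
    "Mezz 1": [3, 4],        # mezzanine slot 1 -> IC bays 3 and 4
    "Mezz 2": [5, 6, 7, 8],  # mezzanine slot 2 -> IC bays 5 thru 8
}

def required_ic_bays(installed_cards):
    """List the IC bays that must hold a module matching the card type
    (FC mezzanine -> FC module, Ethernet mezzanine -> Ethernet module)."""
    bays = []
    for card in installed_cards:
        bays.extend(HALF_HEIGHT_MAP[card])
    return sorted(bays)

print(required_ic_bays(["LOM", "Mezz 1"]))  # -> [1, 2, 3, 4]
```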

That’s more or less the rear of the chassis covered; now onto the front side components.

Power Supply modules

We can use a maximum of 6 x 2650W power supplies with a c7000 enclosure. That’s a whopping 15.9kW (6 x 2650W = 15,900W) of total power! More than three times what the heater in my sauna can produce! Hmm, maybe I should swap my current heater for a c7000 blade enclosure…it would be super cool. And also a bit disturbing that I find it cool.

HP BladeSystem - 31 - Power supply

That’s how a c7000 power supply looks. It is pretty long and slim and, as mentioned before, has its own built-in cooling. These power supplies are the only components in the enclosure that the 10 Active Cool fans don’t cool down.

HP BladeSystem - 32 - 1 PS

That’s power supply #1 going in. Once again: push from the body all the way in, then lock it using the purple handle.

Slot numbering is pretty straightforward: from left to right, 1 to 6. But the best-practice population order is not. The first power supply goes into slot #1 (as in the picture above), the next one into slot #4, then slots 2, 5, 3 and finally 6. You can think of the power supplies as two separate “clusters”: the left side (slots 1, 2 and 3) and the right side (slots 4, 5 and 6). Then you simply populate both clusters alternately, from left to right.
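If that ordering feels hard to memorize, here is a tiny Python sketch that derives it from the two-cluster rule described above; plain illustration, nothing official.

```python
# Derive the best-practice PSU population order from the two-cluster rule:
# left cluster = slots 1-3, right cluster = slots 4-6, filled alternately.

def psu_population_order(n_supplies=6):
    left, right = [1, 2, 3], [4, 5, 6]
    order = []
    for l, r in zip(left, right):
        order.extend([l, r])
    return order[:n_supplies]

print(psu_population_order())   # -> [1, 4, 2, 5, 3, 6]
print(psu_population_order(3))  # -> [1, 4, 2]
```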

Oh, by the way, the LCD display (Insight Display) at the bottom, in front of the PS slots, slides out of the way horizontally if you need to touch power supplies 3 or 4. That Insight Display is one way of managing the OA module.

HP BladeSystem - 33 - 6 PS

We have all six power supplies, so once again the installation is pretty easy. Here you see a fully populated power supply system: all 6 power supplies installed and the Insight Display in front of PS 3 and 4.

So, we have most of the components installed, but something is still missing. That’s right: the blade servers themselves. We will finish the blade infrastructure installation project in my next post, “Installation Project: HP BladeSystem (part 2)”.

Thanks for reading and see you with the next and final part!
