Blade Server Concepts

Blade solutions provide increased server density, better power and cooling efficiency and more flexible management when compared with standalone servers. They do, however, require a fair bit of up-front investment, which is why they aren't a good choice if you only need to deploy a couple of servers.

What is a blade?

A blade server is an ultra-compact server designed to be installed within a special chassis, which supplies the supporting infrastructure to the blades via a backplane connection. Blade servers do not have their own power supplies or cooling solutions (including fans), as these are provided by the chassis. To give you an idea of the densities possible, the accompanying figure shows 64 physical servers installed in a standard 42RU rack using the HP blade system. This can be increased to 128 servers if you use the ProLiant BL2x220c dual-server half-height blades.

Due to their size, only a couple of hard disk drives can be physically installed in each blade server, though in most cases the drives are still hot-swappable, allowing for easy replacement in the event of failure. There are no physical PCI-X or PCIe slots; instead, PCI Express-based mezzanine cards are used to provide expandability. These cards provide an interface between the blade server and the chassis backplane in order to deliver various networking services, including Ethernet, InfiniBand (an interface that is frequently used in cluster applications) and Fibre Channel.

The drawback to using mezzanine cards is that cards of like technology must be used in corresponding mezzanine slots. For instance, if you install a dual-gigabit Ethernet card in mezzanine slot A on one server, slot A on all other servers must contain either an Ethernet card or no card at all. However, there's nothing stopping you from installing a fibre-channel card in slot B in cases like this. This restriction comes from how each blade interfaces with the chassis backplane – it ensures that one type of interface always connects through to an appropriate interface module.

In general, all mezzanine cards feature two ports, which are routed through to separate interface module slots for redundancy. Interface modules are discussed in greater detail below.

Most manufacturers also sell expansion blades, which can be linked to blade servers to provide additional services. For instance:

  • Storage blades allow for the installation of additional hard disk drives
  • Tape blades provide an LTO drive in order to perform data backups
  • PCI Express blades allow you to install PCI Express cards

In general, expansion blades can only be used by a server installed in an adjacent slot within the chassis.

Blade solutions are not standardised, which means that blades made by one manufacturer cannot be used in another manufacturer's chassis, or even in a different class of chassis from the same vendor. Not only does the physical size of each blade change between different classes of blade, but the backplane interconnects are often different.

Most vendors sell half-height blades in addition to full-height variants. The larger physical volume of full-height blades is typically used for additional CPU sockets (often four sockets are available in these servers as opposed to two), additional memory slots or the ability to take more mezzanine cards. HP even has a dual-server half-height blade that essentially doubles server density.

The Chassis

At the core of any blade deployment is the chassis. Chassis range in size from 6 to 12 RU depending on the make and model, with smaller chassis generally targeted at smaller-scale deployments such as the medium enterprise. Depending on your choice of chassis you can fit up to 64 servers in a standard 42RU rack (or 128 servers if using the dual-server blade mentioned above). Compare that to a maximum density of 42 servers with standalone 1RU units and you can see why blades can minimise deployment space.
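
As a quick sanity check on those density figures, here is the arithmetic as a sketch. It assumes a 10RU chassis that takes 16 half-height blades, consistent with the HP figures quoted above; other vendors' chassis will differ:

    # Rough rack-density arithmetic for blades versus standalone servers.
    # Assumes a 10RU chassis holding 16 half-height blades (an assumption
    # consistent with the HP system described in this article).
    RACK_RU = 42

    def blade_density(chassis_ru=10, blades_per_chassis=16, servers_per_blade=1):
        chassis_per_rack = RACK_RU // chassis_ru   # e.g. 42 // 10 = 4 chassis per rack
        return chassis_per_rack * blades_per_chassis * servers_per_blade

    print(blade_density())                     # 64 single-server blades
    print(blade_density(servers_per_blade=2))  # 128 with dual-server blades
    print(RACK_RU)                             # at best 42 standalone 1RU servers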

The chassis is generally the sole component that is physically mounted in the rack – all other components, including power supplies, fans, management and interface modules and of course the blades themselves, are installed into the chassis.

Power Supplies

For a minimum complement of blade servers a chassis requires only a single power supply (though this provides no fault tolerance). As the number of blades installed in a chassis increases, additional power supplies are added in order to provide sufficient power to all blades and to balance load. If power supplies are installed in this manner the chassis is said to be in an "n" configuration, where there are only enough supplies to deliver power to the components in the chassis, with no redundancy.

The danger with installing supplies in this manner is that if a single supply fails, the chassis will not be able to deliver enough power to keep all components running (though the management modules can often be configured to ensure that certain servers are powered down before others in the event of a supply failure). An extra power supply can be installed to provide an "n+1" configuration, which generally ensures that all systems remain running in the event of a single power supply failure.

But what happens if power to an entire circuit is cut? To protect against this scenario, power supplies must be installed in redundant pairs (an "n+n" configuration), with each half of the pair connected to a different power circuit.
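
To make the three schemes concrete, here is a small sketch with hypothetical figures (a 2,250W supply and roughly 8kW of chassis load – real numbers depend entirely on the supplies and blades in question):

    import math

    # Hypothetical figures: 2,250W hot-plug supplies feeding a chassis whose
    # blades draw roughly 8kW in total. Real values depend on the hardware.
    SUPPLY_W = 2250
    CHASSIS_LOAD_W = 8000

    n = math.ceil(CHASSIS_LOAD_W / SUPPLY_W)   # "n": just enough supplies (4 here)

    print(f"n   configuration: {n} supplies, no redundancy")
    print(f"n+1 configuration: {n + 1} supplies, survives one supply failure")
    print(f"n+n configuration: {2 * n} supplies, split across two power circuits")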

Most chassis are designed to support enough power supplies so that, when installed in an n+n configuration, sufficient power can be delivered to an enclosure that is fully populated with blades running at (or close to) full power. Four supplies are sufficient for smaller enclosures, while larger enclosures require six supplies to maintain an n+n configuration.

In general the power supplies plug into the chassis backplane, while the power cords are connected (or hard-wired) to the back of the chassis. This allows the power supplies to be easily swapped out in the event of failure.

Smaller chassis feature IEC 320 C13 connectors – much like regular PCs – allowing them to be easily plugged into a regular mains supply. Larger chassis utilise IEC 320 C19 connectors. While these look similar, they require a 20A feed (rather than the 10A that can typically be drawn safely from a standard mains outlet), so the installation of these larger enclosures often requires additional electrical work during deployment. Some enclosures have the option of being hard-wired to three-phase feeds. The advantage of three-phase is that the power supplies can convert it to DC more efficiently than a single-phase supply, meaning that less power is wasted during conversion and more power can be delivered to the blades.

Fans

Neither the blades nor the other components have their own cooling solutions. Instead, fan modules are installed in key locations to draw air through all components. In turn, each component is designed with cooling in mind – for instance, in server blades the RAM modules, CPUs and baffles are aligned to optimise airflow and assist with cooling.

Each fan module usually consists of two fans in series, providing redundancy in the event that one fan fails. Much like power supplies, fan modules are hot-swappable, and only a minimum complement of fans is required when just a few blades are installed within the chassis.

Interface Modules

Because blade servers are inserted into the chassis, it's not possible to simply plug a network cable, for example, into the back of a blade. All PCIe devices (including mezzanine cards and integrated network cards) are connected from the server through the chassis backplane to an interface module slot.

Just about all blade servers have dual integrated gigabit Ethernet NICs on board. One of these ports is almost always routed through to the first interface module slot, while the other port is routed through to the second. The first port on mezzanine slot A is usually routed to the third interface module, and the second port on mezzanine slot A generally goes to the fourth module – and so forth for all additional mezzanine slots supported by the blade system in use.
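
That fixed routing can be summarised as a simple lookup. The mapping below is only a sketch of the general pattern described here, not any particular vendor's bay numbering:

    # Sketch of the typical port-to-interface-module routing described above.
    # Bay numbers follow the general pattern in the text, not a specific vendor.
    PORT_TO_INTERFACE_BAY = {
        ("onboard NIC", 1): 1,   # first integrated gigabit Ethernet port
        ("onboard NIC", 2): 2,   # second integrated gigabit Ethernet port
        ("mezzanine A", 1): 3,
        ("mezzanine A", 2): 4,
        ("mezzanine B", 1): 5,
        ("mezzanine B", 2): 6,
    }

    for (controller, port), bay in PORT_TO_INTERFACE_BAY.items():
        print(f"{controller} port {port} -> interface module bay {bay}")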

The major advantage of having each port on a given integrated or mezzanine controller routed to a different interface module slot is redundancy. Take Ethernet for example: two separate Ethernet switching modules could be installed and NIC teaming used on the server, so that if one switch fails the server remains connected to the network.

Interface modules are generally installed in pairs in order to provide fault tolerance. However, there is generally nothing stopping you from installing single modules if you don't require this level of redundancy.

There's also great variety in the modules available for these slots. In the Ethernet space, pass-through modules essentially allow you to plug a cable directly into the back of each blade, while Ethernet switch modules are frequently used, providing an internal connection for every blade server as well as a number of uplink ports physically located on the back of the module.

Often third-party networking vendors will manufacture Ethernet and fibre-channel switching modules for the blade manufacturer. Cisco, for example, supplies Ethernet switching modules for IBM, Dell and HP blade systems, which run the same Cisco IOS software found on its standalone switches. Nortel also offers Ethernet switches for some blade systems, while QLogic and Brocade provide fibre-channel switching modules and TopSpin provides InfiniBand solutions.

Lastly, HP (Virtual Connect) and IBM (Open Fabric Manager) both provide virtualised interface modules, which record details about a server's configuration based on the role it performs. This allows you to remove a blade and plug it back in at a different location (or even in a different chassis), and the modules will automatically reconfigure themselves to use the settings that were originally set up. Because configuration is role-based, additional (or hot-spare) servers can easily be added to the blade infrastructure without reconfiguration. This is often invaluable when blade servers are configured to boot from iSCSI or SAN locations, as it allows you to plug in a server and get it running straight away – without having to mess around with fibre-channel zoning or Ethernet VLANs.
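
Conceptually, these modules keep the identity of a server role (MAC addresses, WWNs, boot targets) in a profile and apply it to whichever blade occupies the assigned bay. The sketch below illustrates the idea only – it is not the actual Virtual Connect or Open Fabric Manager interface, and every identifier in it is made up:

    from dataclasses import dataclass

    # Conceptual sketch of role-based profiles as used by virtualised interface
    # modules; the fields and identifiers are illustrative, not a vendor API.
    @dataclass
    class ServerProfile:
        role: str
        mac_addresses: list      # virtual MACs presented to the network
        fc_wwns: list            # virtual WWNs presented to the SAN
        boot_target: str         # e.g. an iSCSI IQN or SAN boot LUN

    profiles = {
        "web-01": ServerProfile("web server",
                                ["02:00:00:00:01:01", "02:00:00:00:01:02"],
                                ["50:06:0b:00:00:c0:01:01"],
                                "iqn.2008-01.example:web-boot"),
    }

    def assign(bay: int, profile_name: str) -> None:
        """Whichever blade sits in this bay inherits the role's identity."""
        p = profiles[profile_name]
        print(f"Bay {bay}: applying MACs {p.mac_addresses}, "
              f"WWNs {p.fc_wwns}, boot target {p.boot_target}")

    assign(3, "web-01")    # original blade
    assign(10, "web-01")   # a replacement blade in another bay picks up the same identity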

Management Modules

The final piece of the blade system jigsaw is the management module. These provide a single interface to manage all aspects of the blade chassis, as well as allowing pass-through to the IPMI interfaces on each blade server. Some chassis allow for the installation of redundant management modules, providing fault tolerance, and some blade systems even allow numerous chassis to be daisy-chained so that you can manage servers in multiple chassis from the one interface.

The management modules provide most of the lights-out management features that are available on standalone servers, plus chassis-specific functions including power and cooling management.

Pros and Cons

Like any break from convention there are positives to be had as well as negatives, and ultimately the choice comes down to whether the plus points outweigh the minus ones. On a personal note, the primary service that my employer sells runs on a mixture of blade and standalone servers. We take advantage of the density, manageability and efficiency advantages of blades for a range of server functions, including database and file server clustering, transcoding servers and web servers. On the other hand, we utilise standalone encode servers because the PCI Express encode cards that we use cannot be easily installed in a blade system.

Blade systems attempt to minimise cost – provided that you're installing enough servers to recoup the initial outlay on the associated chassis infrastructure. If you only need three servers then blades aren't for you: it's much cheaper to buy standalone servers and your own networking and SAN gear, not to mention more scalable and flexible. However, if you're buying eight high-availability servers, the money that you would otherwise spend on network and SAN switching offsets the cost of the chassis and interface modules.
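
A purely hypothetical break-even sketch of that trade-off follows – every price below is a made-up placeholder, not a vendor quote:

    # Hypothetical break-even comparison: standalone servers need separate
    # network and SAN switching, while blades carry a chassis overhead that
    # is spread across however many servers you install. Prices are invented.
    def standalone_cost(n_servers, server=4500, network_and_san_switching=15000):
        return n_servers * server + network_and_san_switching

    def blade_cost(n_servers, blade=3500, chassis_and_interface_modules=20000):
        return n_servers * blade + chassis_and_interface_modules

    for n in (3, 8, 16):
        print(f"{n} servers: standalone {standalone_cost(n)} vs blades {blade_cost(n)}")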

Blade systems attempt to minimise power consumption – fewer, larger power supplies generally means more efficient conversion from AC to DC, and because load (and therefore power consumption) varies at different times on different servers, power management schemes can ensure that the supplies are running in their most efficient mode. The downside is that the power supplies must be sized for maximum load, so additional costs are often incurred at deployment when dedicated circuits have to be installed to power the chassis. In some cases it may not even be possible to install a blade infrastructure because of this requirement.
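
The "most efficient mode" point can be sketched as follows: keep only as many supplies actively sharing the load as needed, so that each runs near its efficiency sweet spot. All figures are hypothetical and the sketch deliberately ignores redundancy requirements:

    import math

    # Illustrative dynamic power management: keep enough supplies active to
    # carry the load, but prefer loading each near its efficient operating
    # point. Capacity and efficiency figures are hypothetical.
    SUPPLY_CAPACITY_W = 2250
    EFFICIENT_LOAD_FRACTION = 0.6   # assume peak efficiency near 60% load

    def supplies_to_keep_active(current_draw_w: float, installed: int) -> int:
        minimum = math.ceil(current_draw_w / SUPPLY_CAPACITY_W)
        preferred = round(current_draw_w / (SUPPLY_CAPACITY_W * EFFICIENT_LOAD_FRACTION))
        return min(max(minimum, preferred, 1), installed)

    for draw in (1500, 4000, 7500):
        print(f"{draw}W load -> {supplies_to_keep_active(draw, installed=6)} active supplies")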

Blade systems attempt to minimise support time by consolidating the management functions of numerous servers into a single interface, though this positive is often overlooked by purchasing managers. Blade systems also attempt to simplify change with the use of virtual interface modules, albeit at a high cost. Finally, blade systems attempt to increase server density by combining multiple infrastructure elements into a single chassis, at the cost of flexibility in what infrastructure can be deployed.

The numbered callouts below describe the front of the chassis shown in the figure:

  1. A full-height blade server (HP ProLiant BL485c) installed across blade slots 1 & 9. In order to accommodate the server a horizontal divider must be removed from the chassis; as a result an adjacent half-height blade would have to be installed in slot 10 before one can be installed in slot 2. This server supports up to two dual-core AMD Opteron processors, 128GB of memory, four hard disks and three mezzanine cards in addition to the integrated dual gigabit Ethernet.
  2. A half-height blade server (HP ProLiant BL460c) installed in blade slot 3. This server supports two quad-core Intel Xeon processors, showing that AMD and Intel architectures can be used within the same chassis. 64GB of RAM is supported, along with two mezzanine cards to supplement the integrated dual gigabit Ethernet.
  3. Linked to the BL460c server in the adjacent slot is an SB920c Tape Blade featuring an LTO-3 Ultrium tape drive.
  4. Unoccupied blade slots are filled with blanking plates in order to maximise cooling performance.
  5. Another half-height blade server (HP ProLiant BL460c).
  6. This time the adjacent blade server is paired with an SB40c storage blade, which adds another six small-form-factor disks on a separate SAS controller to the adjacent server.
  7. Modular power supplies plug in to the front of the chassis.
  8. An LCD control panel allows administrators to perform a range of functions from the chassis itself, including adjusting the IP configuration of the Onboard Administrators, performing firmware updates and more.

The second set of callouts describes the rear of the chassis:

  1. A hot-swappable fan. Fans are installed in pairs – one at the top and one at the bottom. If all blades are installed in one side of the chassis, fans only have to be installed on the corresponding side for efficient cooling operation.
  2. Blanking plates are also installed in slots where fans are not present, again to improve cooling efficiency.
  3. HP 1/10Gb Virtual Connect Ethernet modules are installed in interface slots 1 & 2. This specific module features two external 10GBase-CX4 ports providing 10Gb/s speeds and eight external 1000Base-T ports providing gigabit Ethernet speeds. The Ethernet interfaces on all blade servers can utilise these external uplink ports.
  4. A pair of 4Gb/s Fibre Channel pass-through modules are installed in interface slots 3 & 4. Essentially each port connects directly to a fibre-channel mezzanine card installed in the corresponding server (assuming one exists). This is the closest you'll get to plugging a cable directly into the back of a blade server.
  5. Interface slots 5 & 6 are populated with Cisco 3020 switches – an example of the third-party modules that are available for the HP blade system.
  6. Blanking plates are inserted in unused interface slots. The Onboard Administrator modules control power and cooling management functions as well as providing lights-out management for all servers within the chassis. By plugging a single Ethernet cable into the iLO port you can access the IPMI interfaces on all servers in the chassis. A redundant Onboard Administrator is also installed, providing fault tolerance.
  7. Enclosure interlink ports allow you to manage servers in other chassis from the Onboard Administrators installed in this chassis. Each chassis can interconnect with up to three others.
  8. This chassis utilises IEC 320 C19 power cords, which require 20A feeds. Hard-wired three-phase versions are also available.
