

HP OneView Software

The HP OneView appliance provides software-defined resources, such as groups and server profiles, enabling you to capture the best practices of your experts across a variety of disciplines, including networking, storage, hardware configuration, operating system build, and configuration. By having your experts define the server profiles and the networking groups and resources, you can reduce cross-silo disconnects. System administrators can provision and manage thousands of servers without requiring that your experts be involved with every server deployment.

 

What is slowing your data center management? Too many process steps? Non-standard, manual tasks? Over-committed experts? An ever-expanding project backlog? HP OneView is powerful converged management that reduces infrastructure complexity with automation simplicity. Its software-defined approach can help your IT teams capture their best practices for “get it right” repeatability every time. The open ecosystem of HP OneView allows easy integration with other management products, including HP, VMware, Microsoft, and Red Hat solutions. HP OneView automates the delivery and operations of IT services, transforming everyday management of server, storage, and network resources in physical and virtual environments.

This innovative platform reduces OpEx and improves agility, freeing your resources to focus on new business initiatives. HP OneView supports lights-out automation and provides a simple, fast, and efficient path to Infrastructure-as-a-Service and on to hybrid cloud.

What’s new

  • HP OneView server profile templates let you define configurations once, in minutes, and then provision or update the configuration many times – consistently and reliably.
  • Snapshots for 3PAR StoreServ volumes let you save and revert to an earlier point in time, or create a volume clone.
  • Proactively identify SAN health issues where the expected and actual connectivity and states differ.
  • HP Virtual Connect supports Quality of Service and Dual-Hop FCoE.
  • HP OneView provides server profile mobility across different adapters, different generations and different blade models.
  • The HP Virtualization Performance Viewer for HP OneView helps you monitor and quickly troubleshoot performance issues, optimize physical and virtual compute, and forecast required capacity.

Features

Converged Management

HP OneView delivers a unified management platform that supports HP ProLiant Rack servers, HP BladeSystem, HP 3PAR StoreServ Storage, and HP ConvergedSystem 700 platforms.
The innovative HP OneView architecture is designed for converged management across servers, storage, and networks. The unified workspace allows your entire IT team to leverage the ‘one model, one data, one view’ approach. This streamlines activities and communications for consistent productivity.
HP Smart Search provides fast, sophisticated search to instantly find the exact information you are looking for. And HP Activities gives you a common communication stream of real-time messages (like alerts, team messages, and updates) to reduce ad hoc processes and team productivity inhibitors.
HP MapView lets you see resource status, relationships and dependencies. Visualizing connections and relationships between resources provides fast collaborative troubleshooting and reduces operational errors.
Dashboard views can be customized to display your preferred data. Standard displays show every device in your data center, from anywhere, with a fully-mobile client. Tasks and alerts are shown, along with more information as needed for more specific areas of interest.

Software-Defined Control

Software-defined approaches are designed into HP OneView to empower your management experts to create resource definitions once (for servers, storage, and networks), and then to roll out new resources — rapidly, repeatedly, and reliably.
Fully-automated SAN zoning, including Brocade fabrics, allows you to attach/detach SAN volumes to server profiles without manual zoning. You can also use direct attach (Flat SAN) profiles between HP Virtual Connect and 3PAR StoreServ Storage.
Network management using HP Virtual Connect supports Fibre Channel and FlexFabric interconnects with other advanced features to enhance productivity. A new migration tool helps transition existing users to HP OneView. MapView allows viewing of some typical switches in non-Virtual Connect networks.
Server profile support for HP ProLiant Blade and Rack Servers (now with Gen9 support) automates firmware maintenance, boosts productivity, and saves time. Software-defined approaches capture your best practices by your experts into common templates and processes for leverage by your wider team.
The state-change message bus and REST APIs provide automation and a closed-loop method of ensuring compliance. This allows virtualization administrators to automate control of HP compute, storage, and networking resources without detailed knowledge of each device.
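The REST API follows a session-based pattern: authenticate once, then pass the returned session token on later calls. A minimal sketch of that pattern is below; the `/rest/login-sessions` path, `X-API-Version` header, and `Auth` header follow HP's published OneView REST examples, but the appliance address, credentials, and API version value here are placeholder assumptions — check your appliance's documented API version before use.

```python
import json
import urllib.request

APPLIANCE = "https://oneview.example.com"  # placeholder appliance address
API_VERSION = "200"                        # assumed API version header value

def login_request(username, password):
    """Build the authentication request for POST /rest/login-sessions."""
    body = json.dumps({"userName": username, "password": password}).encode()
    return urllib.request.Request(
        APPLIANCE + "/rest/login-sessions",
        data=body,
        headers={"Content-Type": "application/json",
                 "X-API-Version": API_VERSION},
        method="POST",
    )

def profiles_request(session_token):
    """Build a request listing server profiles, authenticated via the Auth header."""
    return urllib.request.Request(
        APPLIANCE + "/rest/server-profiles",
        headers={"Auth": session_token, "X-API-Version": API_VERSION},
    )

req = login_request("administrator", "secret")
print(req.full_url)  # https://oneview.example.com/rest/login-sessions
```

In a real integration you would send `login_request(...)` with `urllib.request.urlopen`, read the session ID from the JSON response, and feed it to `profiles_request(...)`; the same token-based pattern applies to the storage and networking endpoints.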

Automated in Your Environment

The open ecosystem of HP OneView provides easy integration with enterprise software solutions and with custom user solutions. Standard integrations are provided for HP Software, VMware, Microsoft, and Red Hat. Python and PowerShell scripts are available to help with custom integrations.
HP Operations Analytics for HP OneView is an optional capability that provides ‘Big Data’ analytics for IT operations. You can troubleshoot your converged infrastructure, view the overall data center health, and predict your infrastructure capacity limits.
HP OneView provides standard integrations for VMware vCenter Server, Operations Manager, and Log Insight. Server-profile-based deployment and cluster expansion can create and grow clusters using automated workflows which leverage 3PAR shared storage and/or boot from SAN.
HP OneView for Microsoft System Center is a standard integration that enables server-profile-based deployment and Hyper-V cluster expansion, cluster views, and Fibre Channel / FCoE information. Clear relationships are also shown for health monitoring and alerting.
Full programmability allows users to easily integrate, automate, and customize HP OneView for their own use. The REST APIs, consistent data model, and state-change message bus allow control over management functions and pulling of custom data, enabling its use as an intelligent automation hub.

 


QNAP TVS-1271U-RP NAS Storage

The QNAP TVS-1271U-RP is a 4th generation NAS storage solution designed for data backup, file synchronization, and remote access. Ideal for SMB use cases, it also features cross-platform file sharing, a wide range of backup solutions, iSCSI and virtualization scenarios, as well as all kinds of practical business functions. The QNAP also includes abundant multimedia applications and a wide selection of different component options, all backed by impressive hardware specifications. The TVS-1271U-RP is also a highly scalable solution, as it can support up to 1,120TB of raw capacity using multiple QNAP RAID expansion enclosures.

The TVS-1271U-RP is compatible with SATA 6Gbps drives, with QNAP quoting their NAS to deliver over 3,300MB/s in throughput and 162,000 IOPS. QNAP has also powered the TVS-1271U-RP with an Intel Haswell processor, including Pentium, Core i3, Core i5, and Core i7 options, which gives users the flexibility to build their NAS based on individual needs. QNAP indicates that this will help improve the efficiency of CPU-intensive tasks while serving more simultaneous tasks at once. To help boost IOPS performance, the TVS-1271U-RP supports two on-board internal cache ports, which can be equipped with optional mSATA flash modules. In addition, QNAP's internal cache port design does not use the space of a hard drive tray, which further increases the storage capacity of the TVS-1271U-RP.

Like all QNAP NAS solutions, TVS-1271U-RP is managed by the QTS intuitive user interface. Leveraging the latest version 4.2, this intelligent desktop allows for easy navigation and a ton of new features and enhancements. Users can also create desktop shortcuts or group shortcuts, monitor important system information in real-time, and open multiple application windows to run multiple tasks concurrently.

QNAP TVS-1271U-RP Specifications

  • Form Factor: 2U, Rackmount
  • Flash Memory: 512MB DOM
  • Internal Cache Ports: Two on-board mSATA ports for read caching
  • Hard Drives: 12 x 3.5-inch SATA 6Gb/s or SATA 3Gb/s HDD, or 2.5-inch SATA HDD/SSD
  • Hard Disk Trays: 12 x hot-swappable, lockable trays
  • LAN Ports: 4 x Gigabit RJ-45 Ethernet ports
  • (Expandable up to 8 x 1Gb LAN, or 4 x 10Gb + 4 x 1Gb LAN, by installing optional dual-port 10Gb and 1Gb network cards)
  • LED Indicators: Status, 10 GbE, LAN, storage expansion port status
  • USB/eSATA:
    • 4x USB 3.0 port (rear)
    • 4x USB 2.0 port (rear)
  • Support: USB printer, pen drive, USB hub, and USB UPS etc.
  • HDMI: 1
  • Buttons: Power button and reset button
  • Alarm Buzzer: System warning
  • Dimensions:
    • 89(H) x 482(W) x 534(D) mm
    • 3.5(H) x 18.98(W) x 21.02(D) inch
  • Weight:
    • 16.14 kg/ 35.58 lb (Net)
    • 18.98 kg/ 41.84 lb (Gross)
  • Sound Level (dB):
    • Sound pressure (LpAm) (by stander positions): 45.0 dB
    • (with 12 x HITACHI HUS724020ALA640 hard drive installed)
  • Power Consumption (W)
    • HDD Standby:
      • TVS-1271U-RP-PT-4G: 88.88
      • TVS-1271U-RP-i3-8G: 87.89
      • TVS-1271U-RP-i5-16G: 88.91
      • TVS-1271U-RP-i7-32G: 89.82
    • In Operation:
      • TVS-1271U-RP-PT-4G: 173.38
      • TVS-1271U-RP-i3-8G: 176.27
      • TVS-1271U-RP-i5-16G: 174.64
      • TVS-1271U-RP-i7-32G: 176.42
      • (with 12 x WD WD20EFRX hard drive installed)
  • Temperature: 0~40˚C
  • Relative Humidity: 5~95% non-condensing, wet bulb: 27˚C.
  • Power Supply
    • Input: 100-240V~, 50-60Hz, 7A-3.5A
    • Output: 500W
  • PCIe Slots: 2 (1 x PCIe Gen3 x8, 1 x PCIe Gen3 x4)
  • Fan: 3 x 7 cm smart cooling fan

Design and Build

Like all QNAP NAS solutions, the TVS-1271U-RP has a fairly basic design with its all-metal chassis. There's not too much to say about the front panel, as the vast majority of space is taken up by the 12 drive bays (3.5-inch SATA 6Gb/s or 3Gb/s HDDs, or 2.5-inch SATA HDDs/SSDs). To the far right are the power button and the status, 10GbE, LAN, and storage expansion port indicators.

Turning it around to the back panel shows a host of connection functionality and other features. Front and center are the eight USB ports, four of which are 2.0 while the remaining four are 3.0. Just to the left is a tiny password and network settings reset button, while above are the four Gigabit LAN ports. An HDMI port is also located near the group, with two redundant power supplies on the far right.

In addition, two expansion slots are visible (when not occupied by a 10GbE expansion card), allowing the unit to expand up to 1,120TB in raw capacity using a total of 140 hard drives across 8 expansion units. This is ideal for growing businesses and those that leverage a ton of data every day, such as video surveillance, data archiving, TV broadcast storage, and other large-data applications.
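The 1,120TB figure follows directly from the drive counts. A quick sketch of the arithmetic, assuming 8TB drives (the largest commonly quoted for this class at the time) and 16-bay expansion units, which is what QNAP's quoted totals imply:

```python
internal_bays = 12           # bays in the TVS-1271U-RP itself
expansion_units = 8          # maximum RAID expansion enclosures
bays_per_expansion = 16      # implied: (140 - 12) / 8 from QNAP's totals
drive_tb = 8                 # assumed drive size in TB

total_drives = internal_bays + expansion_units * bays_per_expansion
raw_tb = total_drives * drive_tb
print(total_drives, raw_tb)  # 140 1120
```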

The TVS-1271U-RP supports the ANSI/EIA-RS-310-D rack mounting standards.

Testing Background and Comparables

We publish an inventory of our lab environment, an overview of the lab’s networking capabilities, and other details about our testing protocols so that administrators and those responsible for equipment acquisition can fairly gauge the conditions under which we have achieved the published results. None of our reviews are paid for or overseen by the manufacturer of equipment we are testing.

We tested the QNAP TVS-1271U-RP with the following drives in iSCSI block-level and CIFS file-level tests:

Application Performance Analysis

Our first benchmark of the QNAP TVS-1271U-RP is our Microsoft SQL Server OLTP Benchmark that simulates application workloads similar to those the QNAP TVS-1271U-RP and its comparables are designed to serve. For our application testing we are only looking at the Toshiba HK3R2 SSDs.

StorageReview’s Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council’s Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments. Our SQL Server protocol uses a 685GB (3,000 scale) SQL Server database and measures the transactional performance and latency under a load of 15,000 virtual users.

Looking at the TPS performance for each VM, all were configured identically and performed well, with little disparity between them. The average overall performance was 2,894 TPS. The difference between the top performer, VM2 at 2,912.6 TPS, and the lowest performer, VM4 at 2,876.8 TPS, was 35.8 TPS.

When looking at average latency in the same test, results were mirrored; however, there was a bit more disparity between the configurations. The average latency was 441.0ms. The top performer, VM2 with a latency of 409.0ms, was only 62ms lower than the highest-latency VM, VM4, at 471.0ms.

Our next set of benchmarks is the Sysbench test, which measures average TPS (Transactions Per Second), average latency, as well as average 99th percentile latency at a peak load of 32 threads.

In the average transactions per second benchmark, the TVS-1271U-RP gave us an aggregate performance of 2,047 TPS.

In average latency, we measured 64ms across all 4 VMs, with the spread being from 55ms at the lowest to 77ms at the highest, a difference of 22ms.

In terms of our worst-case MySQL latency scenario (99th percentile latency), the QNAP measured 327ms averaged across all four VMs.

Enterprise Synthetic Workload Analysis

Our Enterprise Synthetic Workload Analysis includes four profiles based on real-world tasks. These profiles have been developed to make it easier to compare to our past benchmarks as well as widely-published values such as max 4k read and write speed and 8k 70/30, which is commonly used for enterprise systems.

  • 4k
    • 100% Read or 100% Write
    • 100% 4k
  • 8K (Sequential)
    • 100% Read or 100% Write
  • 8k 70/30
    • 70% Read, 30% Write
    • 100% 8k
  • 128k (Sequential)
    • 100% Read or 100% Write
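Because block size scales raw bandwidth, the same IOPS figure means very different throughput across these profiles. A small illustrative conversion (the 4k figure matches the SSD iSCSI read result later in this review; the 128k rate is a hypothetical round number for contrast):

```python
def iops_to_mib_s(iops, block_kib):
    """Convert an IOPS figure at a given block size (KiB) to MiB/s."""
    return iops * block_kib / 1024

# ~135k IOPS at 4k is modest bandwidth; far fewer IOPS at 128k is much more:
print(round(iops_to_mib_s(135_464, 4), 1))   # 529.2 MiB/s at 4k
print(round(iops_to_mib_s(18_000, 128), 1))  # 2250.0 MiB/s at 128k
```

This is why the small-block tests below are reported in IOPS while the 128k sequential test is reported in KB/s.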

In the first of our enterprise workloads, we measured a long sample of random 4k performance with 100% write and 100% read activity to attain results for this benchmark. In this scenario, the QNAP populated with SSDs recorded 135,464 IOPS read and 70,246 IOPS write when configured in iSCSI, while CIFS connectivity saw just 22,588 IOPS read and 61,433 IOPS write. For comparison, the HDD configuration posted 10,524 IOPS read and 4,788 IOPS write when configured in iSCSI.

As expected, the average latency benchmark results were much closer in performance. Here, the QNAP populated with SSDs posted an impressive 1.89ms read and 3.64ms write (iSCSI), whereas the HDD configuration posted 24.31ms read and 53.49ms write (iSCSI). As shown in our chart below, there was a large read latency spike when the HDDs were configured in CIFS (398.46ms).
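The measured IOPS and average latency hang together via Little's law: the average number of outstanding I/Os equals IOPS times latency, and this test runs a fixed 16-thread/16-queue load, i.e. 256 outstanding I/Os. A quick sanity check with the SSD iSCSI figures above:

```python
def outstanding_ios(iops, latency_ms):
    """Little's law: average concurrency = arrival rate x time in system."""
    return iops * latency_ms / 1000.0

print(round(outstanding_ios(135_464, 1.89)))  # 256 (read: the 16T/16Q load)
print(round(outstanding_ios(70_246, 3.64)))   # 256 (write: same load)
```

Both results recover the 256 outstanding I/Os of the benchmark load, which is a useful cross-check that an IOPS/latency pair was measured under the stated queue depth.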

Looking at the results of the max latency benchmark, the QNAP NAS populated with SSDs showed the best read performance (71.8ms/CIFS); however, it showed a huge spike in writes at 6,673.4ms. The best configuration for maximum latency in writes was the QNAP populated with HDDs using CIFS.

The SSD configurations in both iSCSI and CIFS showed good consistency, with 1.66ms read and 5.15ms write in iSCSI and 9.22ms read and 22.49ms write in CIFS. As far as the HDD configurations of the QNAP NAS go, our iSCSI block-level protocol showed the best standard deviation read latency while CIFS showed the best writes.

Our next benchmark measures 100% 8K sequential throughput with a 16T/16Q load in 100% read and 100% write operations. Here, the performance of all drives improved substantially, with the QNAP NAS configured with HDDs via CIFS showing the top read activity by a noticeable margin (158,454 IOPS). The QNAP NAS configured in iSCSI posted the top write performance (135,251 IOPS).

Compared to the fixed 16 thread, 16 queue max workload we performed in the 100% 4k write test, our mixed workload profiles scale the performance across a wide range of thread/queue combinations. In these tests, we span workload intensity from 2 threads and 2 queue up to 16 threads and 16 queue. The SSDs configured in iSCSI via the QNAP NAS posted the highest results; however, the iSCSI protocol had the least consistent results (though it reached the 80K IOPS mark). The QNAP HDD configurations had substantially lower IOPS, particularly in our CIFS file-level test.

Results were mirrored when looking at average latency, though the QNAP configured with SSDs had much more stable results in iSCSI. The HDDs configured in iSCSI also outperformed CIFS for this benchmark, which had significant latency spikes.

Performance was much more erratic overall from all of the QNAP NAS configurations tested in our max latency benchmark, though iSCSI and CIFS were more in line with each other than in previous benchmarks. Overall, the QNAP NAS configured with SSDs in iSCSI showed the best max latency results.

The results of the standard deviation benchmark were virtually identical to the results of the average latency benchmark, with iSCSI outperforming CIFS in both HDD and SSD configurations.

Our last test in the Enterprise Synthetic Workload benchmarks looks at 128k large-block sequential performance, which shows the highest sequential transfer speed of the QNAP drive configurations. Looking at the 128k performance of 100% write and 100% read activity, all drives and configurations posted similar write numbers, all near the 2,200,000KB/s mark. The QNAP NAS populated with SSDs and configured in iSCSI had the best read and write performance, with 2,304,102KB/s and 2,285,056KB/s, respectively.

Conclusion

One of the biggest assets of the QNAP TVS-1271U-RP is its ability to scale as businesses grow, combined with its ease of use. With a massive storage pool of up to 1,120TB in raw capacity via multiple QNAP RAID expansion enclosures, the TVS-1271U-RP is able to satisfy almost any need. On the hardware side, the TVS-1271U-RP leverages SATA 6Gbps drives, which provide a good combination of deployment options with affordable storage media. QNAP ships the TVS-1271U-RP with an Intel Haswell processor, available in Pentium, Core i3, Core i5, and Core i7 options. Coupling all of this with the integrated QTS management interface allows the TVS-1271U-RP to boast a mountain of flexibility, making it ideal for a wide variety of applications.

We tested the NAS with both HDDs and SSDs. We saw upwards of 135,464 IOPS read and 70,246 IOPS write in throughput performance, as well as average latencies as low as 1.89ms read and 3.64ms write (both leveraging an iSCSI configuration using HK3R2 SSDs). In our 8K sequential benchmark we saw throughput performance of 158,454 IOPS read with the Seagate NAS HDD (CIFS), while the HK3R2 SSD gave us a write performance of 135,251 IOPS (iSCSI). Our HDD 128k large-block sequential performance boasted impressive speeds, all of which surpassed 2.2GB/s in writes, although SSD performance didn't come out as well. The QNAP populated with SSDs offers strong mixed-workload performance, but CIFS read measurements dropped far below its HDD counterpart, measuring 230MB/s.

Pros

  • Good scalability
  • SATA support offers lower cost drives
  • Easy to configure and deploy

Cons

  • Application performance wasn’t as strong as synthetic workloads
  • Poor SSD iSCSI write performance in synthetic benchmarks

Bottom Line

The QNAP TVS-1271U-RP is a flexible SMB NAS that can cost-effectively scale as business needs grow.


HP APS Server Training Course

Course name: Servicing HP ProLiant APS ML/DL/SL Servers

Course duration: 40 hours


Course Syllabus

Module 1: Configuring ProLiant Server and Array Hardware
Objectives
Firmware upgrade process
Configuring HP ProLiant server
DDR3 memory specifications and configuration rules
Configuring storage subsystem
Learning check
Module 2: Installing Server Support Software
Objectives
HP Insight Control management suites
SmartStart
ProLiant Support Pack
Validating and testing the solution
Setup differences for 100-series ProLiant servers
Learning check
Module 3: Use and Maintain Integrated Lights-Out Products
Objectives
HP Lights-Out technology and benefits
Functions of iLO3 Management processor
Functions of iLO2 Management processor
Learning check
Module 4: Storage Solutions for ProLiant Servers
Objectives
Storage technologies
HP disk drives
HP ProLiant Array controllers
Storage solutions
Learning check
Module 5: Data Availability and Protection for a ProLiant Server
Objectives
Increasing availability through power protection
Rack options
Memory protection technologies
Disk Backup System
Learning check
Module 6: Troubleshooting ProLiant Motherboards
Objectives
Server Boot process
Boot process failure indicators
Troubleshooting reboot problems
Learning check

Register for the CEH security training course


Booting an HP Server from a Flash Drive Containing HP SmartStart

Since we had received many requests from friends to publish this post, the VCenter technical and engineering team decided to put it up.

We hope you enjoy this post.

We had long wanted to publish this post. Simple as it sounds, we often come across (thanks to cost savings) HP servers without a CD drive. Before installing the operating system, you need to update the firmware on all components of the server with the HP Firmware Maintenance CD, and then use the HP SmartStart CD to prepare the boot and install HP utilities and drivers on the machine. If the server has no CD drive, an HP utility lets you dump these CDs to an external USB flash drive.

The tool is called HP USB Key Utility, and you can download it from the official HP website. After downloading, you can install it or run it directly.

With this tool you can transfer images from a CD-ROM or DVD-ROM drive to a USB flash drive.

Note that the minimum version is 7.50 for both the HP SmartStart CD and the HP Firmware Maintenance CD.

Simply select the CD/DVD as the source, indicate the USB drive as the destination, and you are done!

From version 1.5 of the HP USB Key Utility, you can combine SmartStart and Firmware Maintenance, and even several SmartStart x86 and x64 images, on one flash drive. Note that the latest versions of SmartStart and Firmware Maintenance are required for these combinations.

 


Installing and Configuring HP BladeSystem

In this article we look at a server system that is now quite common in any reasonably sized environment. We won't go into whether it is better for some environments than others, or whether it is more convenient or cheaper; we will simply look at an environment based on HP blades, that is, an HP BladeSystem (no particular model), and walk through all the settings that can be configured from its OA, the HP Onboard Administrator. This is the management console for the whole chassis; from it you can manage any component and view its status at all times.

So, by way of introduction: a blade system is an enclosure into which we insert, depending on the model, 8 or 16 blades. Everything in it is redundant and compact, since 16 blades do not occupy the same space as 16 rack-format servers. We have as many power supplies as needed to feed the chassis as a whole (not per server), and fans to cool the environment; the sensors can run them faster or slower depending on temperature, and depending on what we run on top of the blades (a VMware environment, for example), unneeded power supplies can be switched off. With all that we can save a lot of money on electricity, air conditioning, and physical rack space. Each blade (server) sits in a bay and runs its own operating system, totally independent of the other blades (or not). There is a small display to view the status of the whole chassis and to set some parameters. At the back we have the switches: such servers avoid cabling, since the Ethernet and fibre connections are internal, and all switches are duplicated to prevent outages and provide high redundancy. There are also one or two chassis management devices, from which we administer the system through the so-called HP Onboard Administrator. Each HP BladeSystem is different, as are those of different manufacturers (an IBM BladeCenter, for example), but they share the same philosophy and are configured almost identically.

The first step is to physically install the chassis and then start configuring it; for how to mount the rails, anyone who doesn't know can read the official documentation 😉, since in this document we focus on configuration. After connecting the HP Onboard Administrator to a switch or directly to our computer, we can connect to the default IP with the default username and password (admin / password). Naturally, the first things to change are the IP address and the admin password. Another way is from the Insight Display, the panel on the chassis front, from which we can make basic changes like these.

In “Rack overview” there is a brief summary of our chassis and of the items in it, with a front and a rear view. It shows the name of our enclosure, its serial number, and its part number.

In “Enclosure information” we see the condition of the components, whether we have any warnings or everything is OK.

In “Enclosure information”, tab “Information”, we can change the name of our chassis / BladeSystem / enclosure to whatever name it has in our data center. Besides showing us the serial number, it also indicates the chassis support connections: the UID identification LED, the Enclosure link downlink port that connects the chassis to the rest of our data center (we continue the BladeSystem link below), and the Enclosure uplink port, used to link to the chassis above or to connect a computer if necessary.

In “Enclosure information”, tab “Virtual Buttons”, we can turn the UID light LED on or off, to show any administrator which unit they must work on.

In “Enclosure information” > “Enclosure Settings” there is a summary of the BladeSystem devices, where we can see whether any need to be connected/enabled, plus the firmware of everything. We must always keep in mind that we should have the latest firmware and that all common elements should run the same firmware!

In “Enclosure information” > “Enclosure Settings” > “AlertMail” we can, as the name suggests, activate email alerts from our chassis.

In “Enclosure information” > “Enclosure Settings” > “Device Power Sequence”, tab “Device Bays”, we can enable powering on the blades in the chassis in a priority order.

In “Enclosure information” > “Enclosure Settings” > “Device Power Sequence”, tab “Interconnect Bays”, we can enable the connection bays (switches) on the chassis in a priority order.

In “Enclosure information” > “Enclosure Settings” > “Date and Time” we configure the time service of the chassis, either manually or against an NTP time server.

In “Enclosure information” > “Enclosure Settings” > “Enclosure TCP/IP Settings” is where we configure the name, IP, netmask, gateway, and DNS servers of the chassis, that is, of the Onboard Administrator.

In “Enclosure information” > “Enclosure Settings” > “Network Access”, tab “Protocols”, lists the connection protocols enabled for accessing the chassis. We have web access over HTTP or HTTPS, secure shell with SSH, Telnet, and XML reply.

In “Enclosure information” > “Enclosure Settings” > “Network Access”, tab “Trusted Hosts”: if we enable this, access to the enclosure is allowed only from these IPs and not from the entire network.

In “Enclosure information” > “Enclosure Settings” > “Network Access”, tab “Anonymous Data”: if we enable this, the chassis displays some information before we log in; think about whether giving that information away before login is a clue you want to leave 😉

In “Enclosure information” > “Enclosure Settings” > “Link Loss Failover”: if we have two Onboard Administrators and want control to pass to the other OA when the primary OA loses its connection, we enable this and indicate how many seconds the primary OA may go without a network connection (as long as the secondary OA still has network!).

In “Enclosure information” > “Enclosure Settings” > “SNMP Settings”, in case we have a monitoring system configured on our network to manage alerts and notifications, Nagios for example 😉

In “Enclosure information” > “Enclosure Settings” > “Enclosure Bay IP Addresses”, tab “Device Bays”, we configure the IPs of the blades: not of their operating systems, but the iLO IPs, so that we can later connect to each bay.

In “Enclosure information” > “Enclosure Settings” > “Enclosure Bay IP Addresses”, tab “Interconnect Bays”, we configure the IPs of the rear chassis modules: the fibre switches, the Ethernet switches, and so on.

In “Enclosure information” > “Enclosure Settings” > “Configuration Scripts”, we can import configuration scripts to automate the chassis configuration and complete it faster; we can import them from a file or from a URL.
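An OA configuration script is just a plain-text list of OA CLI commands, applied in order. The fragment below is purely illustrative: the command names are assumptions based on the OA CLI style, and the names and addresses are placeholders, so verify the exact syntax against the HP Onboard Administrator CLI guide before importing anything.

```text
# Illustrative OA configuration script (verify syntax in the OA CLI guide)
SET ENCLOSURE NAME ENC-PROD-01
SET TIMEZONE CET
ADD SNMP TRAPRECEIVER 192.0.2.50
SET SNMP COMMUNITY READ public
```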

In “Enclosure information” > “Enclosure Settings” > “Reset Factory Defaults”: as the name says, this resets the chassis to its factory default settings.

In “Enclosure information” > “Enclosure Settings” > “Device Summary” is one of the most commonly used screens when documenting a blade environment: a summary of all the components in our chassis, with description, serial number, part number, manufacturer, model, spare part number, firmware, hardware version... everything: the blades, the switches, the power supplies, the coolers/fans, the blade mezzanines, the chassis info...

In “Enclosure information” > “Enclosure Settings” > “DVD Drive”: from here we can attach the chassis CD/DVD drive to a specific blade, in case we need to present a CD/DVD to a particular blade. Frankly, this is best done from the iLO connection... but it is also available here 😛

In “Enclosure information” > “Active Onboard Administrator”, tab “Status and Information”, we see the state of the chassis, its ambient temperatures, and other chassis data.

In “Enclosure information” > “Active Onboard Administrator”, tab “Virtual Buttons”, we have two buttons: one to completely reset the chassis (which I hope you never have to use, since there is rarely a reason to reboot the entire chassis) and one to turn on the UID information LED.

In “Enclosure information” > “Active Onboard Administrator” > “TCP/IP Settings” is informational: for each Onboard Administrator it shows the network name and other IP data.

In “Enclosure information” > “Active Onboard Administrator” > “Certificate Administration”, tab “Information”, we have just that: information on the SSL certificate of the web server.

In “Enclosure information” > “Active Onboard Administrator” > “Certificate Administration”, tab “Certificate Request”, serves to generate our own certificate, either self-signed or as a CSR we can hand to a certificate authority so they generate a ‘good’ one for us. In the tab “Certificate Upload” we would upload it.

In “Enclosure information” > “Active Onboard Administrator” > “Firmware Update”: here we can update the firmware of our chassis (or downgrade it if we want). We can upload it from a file on our computer, from a URL, or directly from a USB flash drive connected to the chassis.

In “Enclosure information” > “Active Onboard Administrator” > “System Log”, tab “System Log”, we have a LOG of everything that happens in our chassis.

In “Enclosure information” > “Active Onboard Administrator” > “System Log”, tab “Log Options”, we can redirect the LOG to a syslog server, such as Kiwi Syslog.

In “Enclosure information” > “Device Bays” we have all our blades, with their status, whether the UID is on or not, bay number, power state, iLO IP address, and whether the DVD drive is connected or not.

In “Enclosure information” > “Device Bays” > BLADE > tab “Status” we have the status information of our blade; if there were a warning it would tell us the problem here, for example a high temperature. Also, when we select a hardware element, the drawing on the right highlights which device it is, which helps when there are many elements, or with the switches.

In “Enclosure information” > “Device Bays” > BLADE > tab “Information” we see information about the blade in question; all of it is quite interesting and worth noting down, such as the MACs or WWPNs…

In “Enclosure information” > “Device Bays” > BLADE > tab “Virtual Devices” we have the different options for the blade’s power button, and for turning on its UID.

In “Enclosure information” > “Device Bays” > BLADE > tab “Boot Options” we select the boot order of the blade, or a one-time device for the next boot,

In “Enclosure information” > “Device Bays” > BLADE > tab “IML Log”, the Integrated Management Log, we have the logs of the blade; everything that happens to it is recorded here.

In “Enclosure information” > “Device Bays” > BLADE > “iLO”, tab “Processor Information”, we can do remote management of the server; ideally, to control it remotely, we click on “Integrated Remote Console”, or on “Remote Console” if we have the Java JRE.

A new window opens with remote control of the server, where we can do everything we need: mount remote drives, connect a CD / DVD from the remote site, restart, shut down…

In “Enclosure information” > “Device Bays” > BLADE > “iLO”, tab “Event Log”, we have the filtered iLO event records for the server.

In “Enclosure information” > “Device Bays” > BLADE > “Port Mapping”, tab “Graphical View”, we can see the internal connections of the blade. This screen is used to understand the internal wiring between the blade and the switches in our chassis: we see the blade’s adapters (embedded or integrated, and mezzanines), each with its internal ports — typically additional network cards or fibre HBAs with one or more ports — and, for each port on each adapter, which Ethernet or fibre switch port it is connected to.

In “Enclosure information” > “Device Bays” > BLADE > “Port Mapping”, tab “Table View”, we have the same information as in the previous tab, just in a different view 😉
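Since this port-mapping data is exactly what we want in the documentation, it can be worth dumping it into a CSV as we read it off the “Table View” tab. A minimal sketch, assuming hand-collected rows — every bay, adapter name, and address below is a hypothetical example, not read from the Onboard Administrator:

```python
import csv
import io

# Hypothetical port-mapping rows, as read off the "Table View" tab:
# (blade bay, adapter, adapter port, interconnect bay, switch port, address)
mappings = [
    (1, "Embedded NIC", 1, "Bay 1 (Ethernet)", 1, "00:17:A4:77:08:30"),
    (1, "Mezzanine 1 HBA", 1, "Bay 3 (Fibre)", 1, "50:01:43:80:02:5D:5E:1C"),
]

buf = io.StringIO()  # write to a real file for the customer handover
writer = csv.writer(buf)
writer.writerow(["blade_bay", "adapter", "adapter_port",
                 "interconnect_bay", "switch_port", "mac_or_wwpn"])
writer.writerows(mappings)

print(buf.getvalue().strip())
```

Kept as a flat CSV on purpose: it diffs cleanly when blades or mezzanines change.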

In “Enclosure information” > “Interconnect Bays” we see the chassis switches at the rear — in my case two Ethernet switches and two fibre switches — with their state, whether the UID is on, the switch type and model, and the management IP address.

In “Enclosure information” > “Interconnect Bays”, tab “Status”, we can view the status and diagnostics of the switch in question, with alerts for any electrical issues, temperature…

In “Enclosure information” > “Interconnect Bays” > Ethernet bay > tab “Information” we see just that: information about this device, worth noting down in the documentation or for fixing trouble in the future,

In “Enclosure information” > “Interconnect Bays” > Ethernet bay > tab “Virtual Buttons”, same as above: a virtual button to power off / reboot the device, or to turn on the UID if necessary,

In “Enclosure information” > “Interconnect Bays” > Ethernet bay > “Port Mapping” we can see the blades connected to this Ethernet switch, indicating which mezzanine of which blade goes to which switch port; it also shows the MAC of the device in question (a fibre switch would show the WWNN).

In “Enclosure information” > “Interconnect Bays” > Ethernet bay > “Management Console” we launch the switch’s management console, where the switch-level configuration is done — setting up the networks that interest us — through “HP Virtual Connect Manager”,

In “Enclosure information” > “Interconnect Bays” > fibre bay > tab “Status” we can view the status and diagnostics of the fibre switch in question, with alerts for any electrical issues, temperature…

In “Enclosure information” > “Interconnect Bays” > fibre bay > tab “Information” we see just that: information about this device, worth noting down in the documentation or for fixing trouble in the future,

In “Enclosure information” > “Interconnect Bays” > fibre bay > tab “Virtual Buttons” we have the virtual button to power off / reboot the device, or to turn on the UID if necessary,

In “Enclosure information” > “Interconnect Bays” > fibre bay > “Port Mapping” we can see the blades connected to this fibre switch, indicating which mezzanine of which blade goes to which switch port; it also shows the WWNN of the connected HBAs — perfect for having everything well documented and annotated when configuring paths.

In “Enclosure information” > “Interconnect Bays” > fibre bay > “Management Console” we launch the fibre switch’s management console, where the switch-level configuration is done, such as setting up the zoning that interests us; we have a tutorial here on how to configure a fibre switch: http://www.bujarra.com/?p=2752,

In “Enclosure information” > “Power and Thermal” we see the electrical status of the chassis and its temperature; any problem is flagged with an error.

In “Enclosure information” > “Power and Thermal” > “Power Management” we can manage the power redundancy configuration of the chassis, for example enabling “Dynamic Power” to save costs by keeping power supplies that are not needed in standby until required. Mind you, all this talk of savings around such chassis and virtualization may sound like a tall tale, but the savings are real: just take a calculator, multiply the number of servers by what they consume and by the price of the kilowatt-hour (kWh), and you can be amazed at the cost over one year — and to that you must of course add the cost of air conditioning… Put the other way around, there is nothing better than a blade chassis for a virtualization environment.
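The back-of-the-envelope calculation suggested above can be sketched like this — the server count, wattage, and tariff are made-up example figures, not measurements:

```python
# Rough annual electricity cost for a set of servers.
# All figures below are hypothetical examples.
servers = 16              # number of servers
avg_watts = 350           # average draw per server, in watts
price_per_kwh = 0.15      # electricity tariff, currency units per kWh
hours_per_year = 24 * 365

kwh_per_year = servers * avg_watts / 1000 * hours_per_year
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.0f} kWh/year -> {cost_per_year:.2f} per year")
```

Remember the text's caveat: cooling roughly tracks the IT load, so the real bill is higher still.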

In “Enclosure information” > “Power and Thermal” > “Enclosure Power Allocation” shows us the power demanded by the items currently in the chassis — our blades, switches, modules… — and what our total capacity is. One should always keep in mind whether we would still be covered if a power supply failed!

In “Enclosure information” > “Power and Thermal” > “Power Meter”, tab “Graphical View”, shows a graph of our chassis consumption,

In “Enclosure information” > “Power and Thermal” > “Power Meter”, tab “Table View”, shows the recorded consumption figures of our chassis,

In “Enclosure information” > “Power and Thermal” > “Power Subsystem” shows the status of all our power supplies, the power mode, and whether we have redundancy.

In “Enclosure information” > “Power and Thermal” > “Power Subsystem” > “Power Supply X” shows specific information about the power supply in question, such as its capacity / consumption, serial number, part number, spare part number…

In “Enclosure information” > “Power and Thermal” > “Thermal Subsystem”, tab “Fan Summary”, shows a general view of the fans or coolers in our chassis and their utilization.

In “Enclosure information” > “Power and Thermal” > “Thermal Subsystem”, tab “Fan Zones”, shows the ventilation zones of our chassis and whether or not each is covered by fans to cool that area. The normal thing is to place the fans behind the blades, since it does not make much sense to place them on the other side; in any case it is often an HP engineer, not us, who assembles the chassis, so not all the decisions are ours 😉

In “Enclosure information” > “Power and Thermal” > “Thermal Subsystem” > “Fan X” shows the individual status of each fan or cooler, and its utilization,

In “Enclosure information” > “Users / Authentication” > “Local Users” we have local user management on the chassis, to manage access to the blade environment.

In “Enclosure information” > “Users / Authentication” > “Local Users” > “Directory Settings” we can configure access through LDAP directory users instead of local users,

In “Enclosure information” > “Users / Authentication” > “Local Users” > “Directory Groups” we manage the groups of LDAP users,

In “Enclosure information” > “Users / Authentication” > “Local Users” > “SSH Administration” manages the keys for SSH,

In “Enclosure information” > “Users / Authentication” > “Local Users” > “HP SIM Integration” serves to integrate the Onboard Administrator with HP Systems Insight Manager, so credentials can be passed through.

In “Enclosure information” > “Users / Authentication” > “Local Users” > “Signed in Users” shows the users currently logged in to the blade chassis, and historic logins,

In “Enclosure information” > “Insight Display” we see the small screen on the front of the chassis, a little display that lets us perform certain basic configurations with a couple of buttons and view the chassis status briefly.

Well, the interesting thing is that once we have set some of this up, it should be well documented — both for ourselves and to hand over to the customer, so that whoever comes after us knows what is installed and how: logical drawings like this one of the connections, and documentation of IPs, MACs, WWNNs, WWPNs, cabling… With all that in place the blade infrastructure is far more comfortable to manage and maintain. A bargain!!!


Installing and configuring HP LeftHand

This article covers some generic configurations for these HP SAN arrays, called HP LeftHand. In this case it is done with virtual appliances running under VMware, which makes for a much more flexible lab environment. HP has several physical LeftHand array models, all with the same system but with different capacities, disk models, and Ethernet ports: the HP LeftHand P4500 and HP LeftHand P4300 series. For production environments there is also the virtual appliance, the HP LeftHand P4000 Virtual SAN Appliance, or VSA.

In this document we will see the main features of these arrays, such as Storage Clustering (higher performance and capacity), Network RAID (greater data availability), Thin Provisioning (reduced cost and better utilization of disk capacity), iSCSI (Ethernet-based connectivity), and snapshots and replication using Remote Copy (for local or remote replication, to improve data protection and availability).

This is the scenario that will be built over the course of this document: four HP LeftHand SAN arrays will be deployed, a multi-site cluster will be created, a LUN will be created, Network RAID replication will be configured to see how it works and what it gives us, and the LUNs will be presented to the hosts. You will see how to create a snapshot of a LUN to use as a backup or to dump onto another LUN, and how to use SmartClone… What we need is an iSCSI environment on a private network where the arrays and the Ethernet ports of the hosts that will connect to them live; clients / workstations must not be connected to this network. The network should be at least 1Gb to be supported, and ideally a 10Gb iSCSI network!

Right, we boot the first array; there is very little to configure — giving it a management IP address is all we need. So we boot it, and when prompted for a login we type ‘start’.

We press Enter here on “Login”,

We go to “Network TCP/IP Settings” > “eth0”, select “Use the following IP address” and specify the IP address for this array. “OK”,

“OK” to save changes,

“OK”, and with this we have completed the minimum configuration on the HP LeftHand arrays.

Now what we have to do is connect to manage the arrays; we will use their management console, called “HP LeftHand Centralized Management Console”, or “HP LeftHand CMC”.

Installing the HP LeftHand Centralized Management Console,

It needs to be installed on a Windows computer connected to the iSCSI network, so that management happens on the private iSCSI network.

Installation is a simple wizard; we follow it and end up with the console installed, to manage our arrays individually or at the Management Group level. “Next” to start the wizard,

We accept the license agreement, “I accept the terms of the License Agreement”, “Next”,

We select a custom installation by checking “Custom” & “Next”,

We select “Application” to install the console, optionally “SNMP Support” to audit the arrays via SNMP, and “Documentation” for just that: documentation & help. “Next”

We select the installation path, by default: “C:\Archivos de programa\Lefthand Networks\UI”, “Next”,

We choose whether we want an icon…

We choose whether to open the console on finishing, and other miscellaneous options,

A summary of the installation; we click “Install” to finally start installing the management console,

… wait a few seconds while installing…

Ready!

We open the “HP LeftHand Centralized Management Console”,

If this is the first time we open this console, we need to configure it: namely, add the storage arrays, create a management group, create a cluster, and optionally create a LUN. All of this can be done with the wizards. So first we add the arrays from “1. Find Nodes Wizard”,

“Next” to start the wizard,

We can search for the arrays by fixed IP address, by name, or simply broadcast on the network to look for them; I choose the first option, which is more convenient.

From “Add…” we add the IP range where my arrays live, and check the “Auto Discover” option so that the next time the console opens it automatically remembers these arrays; press “Finish”,

Well, the four arrays have been detected, with their names, IPs… click “Close” to close the wizard.

Then we run “2. Management Groups, Clusters and Volumes Wizard” to create, first, the management group to administer the arrays; second, a cluster of the type that interests us between them; and finally (optionally) the LUNs.

“Next” to start the wizard,

We create a management group for our environment from “New Management Group” & “Next”,

We give the group a name and select the arrays that should belong to this management group; in principle it makes sense to add all the arrays we will work with,

We create a username and a password to protect this management group, so that only authorized people can manage it, “Next”,

We configure the group’s time; the right thing is to use an NTP server on the network if we have one (a firewall, a domain controller…); in my case I set it manually with the second option, “Next”,

Well, here we set the type of cluster we want: if we will only have one site or location, select “Standard Cluster”; if instead we have at least two physically separated sites, select “Multi-Site Cluster”. In this document we will build a multi-site cluster,

We give the cluster a name and select the arrays (storage nodes) that we want participating in distributing the data across our organization; in principle select them all,

Now we must define our sites, or physical locations (if we have them), for proper administration, so we click “New…” to create the sites,

In my case I will create two sites and associate each array with a site by adding it to its location. Careful: each site must have the same number of arrays as the others, and of course the same models.

Well, it shows that no arrays remain unassigned and lists the sites we have configured, “Next”,

From “Add…” we add an IP address (Virtual IP) for the cluster; this is where our iSCSI initiators will connect,

We indicate an IP address and a netmask, “OK”,

The wizard also asks whether we want to create a volume, a LUN; this is optional, and I prefer to do it later, so we check “Skip Volume Creation” & “Finish”,

… we wait a while while it creates the management group, joins all the arrays, creates the cluster…

Once done, click “Close” to manage this new environment.

Here we have the Management Group, with the cluster, the nodes, and all the settings we can manipulate,

If we want a computer to connect to our volumes or LUNs, first we have to register that computer’s initiator; to do this we go to “Servers” and right-click “New Server…”

This is what will later allow the computer to access the LUNs we assign to it; it acts as a sort of alias. We indicate its name, a description, whether it is allowed iSCSI access, whether we want to use load balancing (not all initiators support it), whether we want to use CHAP authentication, and the initiator’s name, the Initiator Node Name. The iSCSI initiator name can be obtained from the properties of the host that will be attached to the arrays. “OK”,

HP LeftHand arrays need Managers to keep all the data correctly replicated between arrays, to communicate with the storage nodes, and to withstand the failure of different nodes… The quorum is n/2 + 1, where n is the number of Managers. A Manager is a process that controls the environment, associated with a management group. It is recommended that the number of Managers be odd; that is, in the configuration I have now — four arrays, each with a Manager started — I should start one more Manager. I have the option (less advisable) of using a Virtual Manager (which runs on one of the arrays) that is only brought up manually, after one of the nodes has failed, so that quorum is not lost. Alternatively, there is the option of a Failover Manager, which should run apart from the arrays (advisable) at a third site (or at least on a network separate from the arrays); this one does come up automatically when a node fails. It runs as a virtual machine on VMware ESX or VMware Server.

This way, even if one node goes down, or two, we still have 3 Managers up, quorum is maintained, and we can keep working with the current state of our organization.
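The quorum arithmetic described above can be sketched as a couple of helper functions:

```python
# Quorum for LeftHand Managers: more than half must be running.
def quorum(managers: int) -> int:
    """Minimum number of Managers that must be up to keep quorum."""
    return managers // 2 + 1

def tolerates(managers: int, failed: int) -> bool:
    """True if losing `failed` Managers still leaves the group quorate."""
    return managers - failed >= quorum(managers)

# Four arrays each running a Manager, plus one Failover Manager = 5.
total = 5
print(quorum(total))          # Managers needed for quorum
print(tolerates(total, 2))    # two nodes down: still quorate?
print(tolerates(4, 2))        # with only 4 Managers, two down loses quorum
```

This is why the odd count matters: with 5 Managers two simultaneous failures are survivable, while with the original 4 they are not.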

And we would select the node on which we want the Virtual Manager to run.

Right, let’s create a volume — a LUN — so that we can later present it to the iSCSI initiators and work with our arrays’ shared storage. From “Volumes and Snapshots”, right-click “New Volume…”

Tab “Basic”: we indicate the name of the volume, a description, the size we want to give it, and which servers we want to present it to, from “Assign and Unassign Servers…”. It is advisable to create the volume with just the size needed, because resizing it later, smaller or larger, is very simple.

We select the server that should see this volume, check “Assigned”, and select the permissions we want it to have; for full access indicate “Read/Write”, “OK”,

Tab “Advanced”: the important part is the replication level we want for this LUN between the nodes or arrays, which is a defining characteristic of this type of array (see the image below for more detail). We also set the replication priority between the arrays: “Availability” or “Redundancy”. And the type of LUN: primary or remote.

The replication level indicates how many times the data is copied; that is, 2-Way means the information is stored twice, 3-Way three times, and 4-Way four times. Which write patterns are possible depends on the number of nodes we have, so it is best to check our particular case. As we shall see, the higher the replication level, the more space is reserved — our information occupies more — but we are much better protected.
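The space cost of each Network RAID level follows directly from that description — n-way replication stores the data n times:

```python
# Raw space consumed across the cluster by a volume at each
# Network RAID replication level (n-way stores the data n times).
def raw_space_gb(volume_gb: float, replication_level: int) -> float:
    return volume_gb * replication_level

for level in (1, 2, 3, 4):
    print(f"{level}-Way: a 100 GB volume consumes "
          f"{raw_space_gb(100, level)} GB of raw capacity")
```

So a 100 GB volume at 2-Way costs 200 GB of raw capacity across the nodes, and at 4-Way it costs 400 GB — the protection/space trade-off the text describes.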

This is what the newly created LUN looks like.

On a volume we can create snapshots, to freeze a fixed point in time that we can roll back to at any given moment (losing the information written afterwards), or simply to use as a basis for (remote) backups; we can also present these snapshots to hosts if that interests us. To create a snapshot manually, right-click the volume, “New Snapshot…”

We indicate the name of the snapshot; by default it uses the suffix “SS” to indicate that it is a snapshot. “OK”.

And these are the options we have on a snapshot — “Roll Back” being the interesting one 😉

We can watch the performance of the arrays, or of any member of our organization, from the “Performance Monitor”, adding or removing counters from the graph.

Or, for example, from “Use Summary” we can see a graph of the actual usage of our data: reserved (provisioned) data, space used by snapshots, what is not used, and the total size. With Thin Provisioning, rather than Full, the LUN would show different figures here.

As above, snapshots can also be taken remotely, that is, placed onto another volume; to do so, right-click the volume, “New Remote Snapshot…”

We have to complete the items it asks for; first, click “New Snapshot…” to create the snapshot,

Same as before: we create a snapshot and give it a name, “OK”,

And now we must specify a remote volume from the drop-down; if we do not have one, we will have to create it from “New Remote Volume…”

“Next” to select the management group,

Existing cluster (“Existing Cluster” & “Add a Volume to the Existing Cluster”), “Next”,

In this cluster, which it discovers automatically, “Next”,

We give this volume a name, and choose whether or not its content should be replicated, “Finish”,

We confirm and “Close”,

Now we click “OK”, and it will create a snapshot of the source volume and mount / copy it as if it were another volume.

After a while of copying and replicating, we see that we have two volumes with the same information,

We can use another feature of the HP LeftHand arrays, SmartClone volumes, to save space in our arrays. A practical example: if we have machines (servers, say) with the same base system, we create a LUN for it and install the operating system and the necessary configuration on it; this will occupy, for example, 2GB. If we have 10 servers each using 2GB of the same information, we are wasting 20GB. The idea is to create a volume with this base setup and create SmartClones of it: each server uses the base disk for the common information and stores only its own data in its individual LUN, without duplicating information senselessly.
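The worked example above — 10 servers sharing a 2 GB base image — can be put into numbers:

```python
# Space saved by a shared SmartClone base volume versus full copies,
# using the worked example from the text: 10 servers, 2 GB base image.
# Per-server deltas (each server's own writes) are ignored here.
base_gb = 2
servers = 10

full_copies_gb = base_gb * servers        # one full LUN per server
smartclone_gb = base_gb                   # one shared base disk
saved_gb = full_copies_gb - smartclone_gb

print(f"Full copies: {full_copies_gb} GB, "
      f"SmartClone base: {smartclone_gb} GB, saved: {saved_gb} GB")
```

Each server still grows its own thin LUN as it writes, so the saving applies only to the common base data — but for large fleets of identical systems that base is most of the disk.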

Right, to do this, on the volume with the information we want the other machines to use, right-click “New SmartClone Volumes…”

First we take a snapshot, “New Snapshot…”

Same as before, “OK”,

We indicate what we want to call this base volume, and the provisioning type for the volumes we will create from it; for these environments “Thin” is recommended rather than “Full”, so that only what is used is occupied (this also means we can create more volume capacity than we actually have available — very dangerous, but possible). We select the servers (iSCSI initiators) that should see each SmartClone, and their permissions (the example in the screenshot is not ideal, since I did not have more machines for the demo). We click “OK”,

And if we go back to the earlier graph, we see (in my case) that we have committed more space than we have available! (That is Thin Provisioning.) Full Provisioning reserves all the space assigned to a LUN, even if it is not used.

Well, by going through all the console tabs we need, we can also get information about our hardware, the status of its connections…

And if an array goes down and we have the Virtual Manager (or the Failover Manager) up, we will see that we can even survive the simultaneous failure of two nodes at once.

To connect to this storage, where possible we will use the iSCSI initiator provided by HP LeftHand, since it balances connections better (although it is Windows-only): the HP LeftHand MPIO DSM with Dynamic Load Balancing, which fetches data dynamically over different connections to the arrays. With a traditional iSCSI initiator such as the Microsoft iSCSI Initiator, the cluster instead redirects the host’s requests to the least loaded node (Virtual IP Static Load Balancing).


HP products

HP integrated systems

HP’s integrated systems include the following general categories:

  • Storage
  • Networking
  • Software
  • Server

 

HP Servers

HP Storage

HP Networking

HP Software and Solutions

  • Manage networks, apps, analytics and clouds.

Managing ESX virtual servers with VMware vCenter Server v5.5.0

Managing ESX virtual servers with VMware vCenter Server v5.5.0 Build 1945270 Update 1c x64

VMware vCenter Server is made by VMware, and its main job is managing ESX hosts and the virtual machines based on them. Simply put, this program lets you manage multiple virtual machines from different ESX hosts in a single console, in a fully professional way. Many widely used vSphere virtualization features require vCenter to be enabled — features such as vMotion, Fault Tolerance, and so on.

VMware vSphere is the world’s leading virtualization platform; it offers a variety of capabilities to users through its different tools. It is essentially a very, very advanced operating system with which you can build data centers and all kinds of private and public clouds.
Astonishing networking and security capabilities, compatibility with other virtualization software such as Hyper-V, dramatic cost reduction, always-available services, and dozens of other features and tools have made it the virtualization giant of the world.
Version 5.5 of vSphere provides very good support for Windows 8 and Windows Server 2012, as well as for modern Linux operating systems.

In general, several software products are used for virtualization, the best known being VMware, Xen, Hyper-V, Virtuozzo, and others.
See the full list of these platforms and their features here.
Among all of these, VMware seems to have delivered more capabilities and better features than the rest; in its own comparison with Xen and Hyper-V it provides this table.

Key features of VMware vCenter Server:

– Very advanced automation
– Rapid creation of virtual machines
– No need for, or dependency on, a host operating system
– Ability to provide a control panel for each virtual machine
– Simultaneous deployment and management of multiple ESX hosts
– Full management of virtual servers based on ESX and ESXi
– Support for up to 30,000 virtual machines
– Linked Mode capability and connection to vSphere
– Resource management and optimization of virtual servers
– Dynamic resource allocation
– Configurable custom Roles and Permissions
– Fully compatible with other VMware software such as vSphere and Workstation
– Ability to assign CPU cores to each virtual machine
– Independent operation of virtual machines without affecting one another
– Web-based control of virtual machines
– Fast, dynamic changes to each machine
– Resizing each machine’s hard disk without data loss
– and more…

vCenter’s range of activity falls into three general categories:

1. Visibility: while configuring and deploying ESX and virtual servers, vCenter lets you observe the efficiency and performance of each server, and manage them through Roles and Permissions.
2. Scalability: the Visibility features can be extended, which is one of vCenter’s scaling strengths; with Linked Mode you can control multiple vCenter servers from a single vSphere client.
3. Automation: vCenter lets you define a series of actions for each alarm and error message, to be executed in order after the error occurs; this automates many operations and makes the workflow much faster.

Specifications

Vendor: VMware, Inc
Price: US$6,044 (for information only)
File size: 3,267 MB
Release date: 16:30 – 1393/5/11 (Iranian calendar)
Source: Lordly | King of Tricks

Software installation guide:

Very important notes:
– This edition is Server only; if you want to use the vCenter Client, you must download vSphere.
– This program installs only on 64-bit operating systems.
– Before installing, you must have at least SQL Server 2008 R2 installed.
– Pay close attention to the system requirements.
– The file has been compressed as much as possible.

Installation guide:
1. First download the program and extract it.
2. Run autorun.exe and follow the installation steps.
3. Use one of the serials below to activate the program:
5J20M-6DHEJ-K884C-012RH-CERM5
1402H-6DLEH-K804A-010UH-3H3Q5
00224-4DL94-58U4C-023A6-9H3L1
HJ22J-0ELE3-68643-0C0U0-AE934

Contents of the 5.5 package:

VMware ESXi 5.5
VMware vSphere Client 5.5
VMware vCenter Server 5.5

Notes:

1- This release includes a keygen for the components listed.
2- If you need other components, request them through our contact / software request form and they will be added.
3- This release installs on a server without needing any operating system.

Download

Other software, plus training materials including PDF e-books and training videos, will be added over time…

If needed, Lordly is ready to provide consulting on virtualization solutions to all organizations and companies.

Note: for those who cannot download, this software collection will be offered on a single DVD at a reasonable price. If interested, please raise it in the comments section.

VMware vSphere 5.5

An introduction to the virtualization programs and services included in vSphere 5.5:

vShield Manager: this is the virtual firewall (vFirewall); it provides firewall capabilities such as monitoring network traffic and securing the virtual machines within it.
vSphere App HA: a plugin for the vSphere Web Client that provides High Availability for applications running in an environment virtualized with vSphere.
vSphere Big Data Extensions: enables rapid deployment of Hadoop clusters in an environment virtualized with VMware vSphere.
VMware vSphere Data Protection: backup and restore facilities, tools, and capabilities (including various, highly flexible backup and restore methods).
vSphere Replication: lets you replicate running, powered-on virtual machines; the new version also brings a dramatic reduction in bandwidth usage, far less load on storage resources, and many Disaster Recovery capabilities.
Automation Tool: used to deploy and provision private and public cloud services, physical infrastructure, virtualization applications, and cloud providers.
vCenter Operations Manager: provides integrated management of performance, capacity, configuration, and more across the cloud infrastructure.
vCenter Server: provides central management of all servers, hosts, and virtual machines.
vSphere Hypervisor: the core virtualization layer of vSphere.
VMware Tools for Linux Guests: as the name suggests, the tools needed for Linux-based guest systems.
vSphere Client: allows connecting to and managing vSphere from various devices and platforms.


EMC storage

EMC-based data storage infrastructure services


Since data forms the backbone of the information technology industry, the most important duty of every organization’s data center managers is to preserve, maintain, and manage information whose volume and variety grow continuously. The main challenges in managing the communication network and data storage include data security, storage performance and capacity, the backup method, and suitable management tools. Traditional methods of keeping data scattered across different servers can no longer keep up with an organization’s growing data. In modern data center architecture, for ease of management and for security, data is kept on dedicated storage equipment over a properly designed infrastructure, while automatic backup to another physical location is also provided.

Iranian Information Technology Company uses storage equipment from EMC (the best-selling storage brand of recent years) for its storage infrastructure, and equipment from Quantum for its backup systems.

EMC focuses specifically on storage and information management systems and solutions, offering a wide range of solutions and products for organizations of all sizes (small, medium, large). Its main activity is providing professional hardware for data storage and data networks, along with the related software.

EMC owns many large, well-known companies such as VMware and RSA, and the equipment it provides is fully compatible with their solutions. Given the importance of this area, Iranian Information Technology Company has chosen EMC from among the major storage vendors and provides professional services based on it.

تا قبل از سال 2011 محصولات ذخیره‌سازی اطلاعات کمپانی EMC در رده Midrange شامل خانواده CLARiiON بعنوان SAN و Celerra بعنوان NAS بوده است. پس از تولید شش نسل، این کمپانی نسل بعدی محصولات خود را تحت نام VNX و بعنوانUnified Storage ارائه کرده است که همزمان از پروتکلهای Block و File پشتیبانی می‌کند. خانواده VNX شامل پنج مدل مختلف VNX5100، VNX5300، VNX5500، VNX5700 و VNX7500 بوده است که از سال 2013 محصولات VNX5200، VNX5400، VNX5600، VNX5800، VNX7600 و VNX8000 به آنها اضافه شده است. تفاوت عمده این محصولات در نوع پردازنده و مقدار حافظه، تعداد دیسک‌ها و LUNها، نوع و پورت‌های I/O می‌باشد. افزون بر پورت FC امکان قراردادن ماژول پورتهای ارتباطی دیگر نظیر iSCSI 1Gb/s، iSCSI 10Gb/s و FCoE 10Gb/s وجود دارد که علاوه بر کاهش هزینه‌ها، به سازمان این امکان را می‌دهد که بهترین ارتباط را نسبت به نوع برنامه کاربردی و نیازهای آن انتخاب نماید.

Because VMware is owned by EMC, the VNX delivers maximum performance and compatibility with virtualization products, making it an ideal choice for deploying VMware virtualization solutions.

Recognizing how critically important data storage is for every organization, the company offers the design and implementation of modern, EMC-based storage infrastructure among its services. Its services in this area include:

▪ Initial assessment of the organization's data volume and its sensitivity
▪ Design of a suitable storage infrastructure based on that assessment
▪ Supply of EMC-based storage and backup equipment
▪ Installation and commissioning of the storage infrastructure and related equipment
▪ Migration of the organization's data to the new equipment
▪ Maintenance and support of the EMC storage infrastructure
▪ EMC training


Installing VMware ESXi 5.5 on HP DL380 G9 servers

Today I want to walk you through installing and configuring VMware ESXi 5.5 on HP G9 servers.

First, download the latest update of version 5.5, which is U3, from the link provided.

Unlike the familiar VMware Workstation or VMware Server, VMware ESXi installs directly on the hardware with no need for another operating system underneath — it communicates with the hardware directly, so it is itself an operating system. This kind of virtualization software is called a Type 1 hypervisor, meaning it talks to the hardware directly; for testing, you can also install it inside VMware Workstation. Before installing VMware ESXi you need the right hardware: ESXi only runs on 64-bit CPUs, and an Intel CPU must support Intel VT while an AMD CPU must support AMD-V. The minimum RAM to install ESXi 5.5 is 4 GB (earlier releases accepted 2 GB), and that covers only ESXi itself — to run virtual machines on top of it you will need more, depending on how many guests you plan to run and what you want them to do. You also need at least one network card, though I recommend having more than one. To see which hardware is compatible with VMware ESXi, click here.
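Before booting the installer, you can verify the CPU requirement from any running Linux on the same box: Intel VT shows up as the `vmx` flag and AMD-V as the `svm` flag in `/proc/cpuinfo`. A minimal sketch (the helper name `has_virt` is my own; on a real host you would feed it the output of `grep -m1 '^flags' /proc/cpuinfo`):

```shell
# Hypothetical helper: inspect a CPU flags line and report whether
# hardware virtualization is available (vmx = Intel VT-x, svm = AMD-V).
has_virt() {
    case " $1 " in
        *" vmx "*) echo "Intel VT-x supported" ;;
        *" svm "*) echo "AMD-V supported" ;;
        *)         echo "no hardware virtualization flag found" ;;
    esac
}

# Example inputs; on a real host use: has_virt "$(grep -m1 '^flags' /proc/cpuinfo)"
has_virt "fpu vme de pse msr pae vmx sse2"
has_virt "fpu vme de pse msr pae sse2"
```

If the flag is missing, check the BIOS first — VT-x/AMD-V is often present but disabled there.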

Those of you already familiar with this product (it was covered earlier on this site) know that before the ESXi server there was the ESX server. ESX was discontinued after version 4 because of its large update footprint, its weaker security, and its Red Hat-based service console, which forced VMware to support two software stacks. ESXi took its place on the market with better security, smaller updates, and a smaller initial footprint — it is lighter and faster to install than ESX was. You can download the 60-day trial of ESXi as an ISO file from the VMware site so we can go through this step by step. If you have good hardware available, you can install VMware ESXi on a physical PC; if not, do as I do and use VMware Workstation (installing a virtual operating system is also covered on this site). After creating a virtual machine for ESXi as described above, we continue with the installation, starting by selecting the boot device, which is set up for the ESXi installer. Just note that the VMware Workstation version you use must be newer than 6: in older versions, installing ESXi was not straightforward and you had to hand-edit the VM's vmx configuration file to get the VM to install. From the versions after 7 (and, I believe, in 7 itself), Workstation supports installing ESXi directly — an ESXi entry was added to the guest operating system list, which makes things much easier. Preferably create your VM with that option.
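For reference, the manual vmx tweak mentioned above essentially amounted to declaring the guest as ESXi and passing VT-x/AMD-V through to it. A minimal sketch of the relevant lines (these are real VMware Workstation vmx keys, but the values here are illustrative, not a complete configuration):

```
guestOS = "vmkernel5"
vhv.enable = "TRUE"
memsize = "4096"
ethernet0.virtualDev = "e1000"
```

`guestOS = "vmkernel5"` marks the guest as ESXi 5.x, `vhv.enable` exposes hardware virtualization to the guest, and `e1000` is a NIC model the ESXi installer recognizes out of the box. With Workstation 7 and later, picking the ESXi guest type sets this up for you.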

After you choose the VMware ESXi install option, you will go through the steps shown below.

[Screenshots 1–8: VMware ESXi 5.5 installer steps on an HP DL380 G9]

At this point we begin the installation: after pressing Enter and accepting the end-user license agreement, ESXi checks all the hardware for installation compatibility.

In the next step, the installer displays the available hard disks. If you have several hard disks, they are all listed, and any remotely accessed disk, such as one presented by a SAN, is also shown here.

Pressing F1 shows a disk's technical details, including whether it contains a VMFS volume (the ESXi datastore file system), so that you don't accidentally install the server onto a disk whose files you still need.

Selecting the keyboard layout.

Choosing the root password (in older versions the server was installed first and the password was changed afterwards). Keep in mind that this is a text-mode environment and you navigate it with the arrow keys.

After setting the password and before installation, a warning appears stating that the selected hard drive is about to be re-partitioned, so you can confirm you picked the right drive. Pressing F11 starts the installation.

When the installation finishes, a message notes that this release is a 60-day evaluation and that a license must be purchased after those 60 days. Pressing Enter reboots the system.

After the reboot, we land in the main ESXi console.

So far we have only managed to install the ESXi server itself; I will cover the server settings and how to connect with the management console software in the next article.
