How To Install The Cisco Nexus 1000V on vSphere 5
Installing the Cisco Nexus 1000V distributed virtual switch is not that difficult once you have learned some new concepts. Before I jump straight into installing the Nexus 1000V, let’s run through the vSphere networking options and some of the reasons you’d want to implement the Nexus 1000V.
vSS (vSphere Standard Switch)
Often referred to as vSwitch0, the standard vSwitch is the default virtual switch vSphere offers you, and provides essential networking features for the virtualisation of your environment. Some of these features include 802.1Q VLAN tagging, egress traffic shaping, basic security, and NIC teaming. However, the vSS, or standard vSwitch, is an individual virtual switch on each ESX/ESXi host, and each one must be configured separately. Most large environments rule this out as they need to maintain a consistent configuration across all of their ESX/ESXi hosts. VMware Host Profiles go some way towards achieving this, but they still lack the features found in distributed switches.
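To illustrate just how per-host the standard vSwitch is, here is a minimal sketch using the ESXi 5 command line; the port group name and VLAN ID are purely examples, and you would have to repeat this on every single host (or script it):

# esxcli network vswitch standard portgroup add --portgroup-name=Production --vswitch-name=vSwitch0
# esxcli network vswitch standard portgroup set --portgroup-name=Production --vlan-id=100

Run across ten hosts, that’s ten separate configurations to keep in sync, which is exactly the management overhead a distributed switch removes.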
vDS (vSphere Distributed Switch)
So the vDS, also known as the DVS (Distributed Virtual Switch), provides a single virtual switch that spans all of the hosts in the cluster, which makes configuring multiple hosts in the virtual datacenter far easier to manage. The features available with the vDS include 802.1Q VLAN tagging as before, but also ingress/egress traffic shaping, PVLANs (Private VLANs), and network vMotion. The key benefit of a distributed virtual switch is that you only have to manage a single switch.
Cisco Nexus 1000V
In terms of features and manageability, the Nexus 1000V goes over and above the vDS: it will be immediately familiar to those with existing Cisco skills, and it adds a heap of features that the vDS can’t offer, for example QoS tagging, LACP, and ACLs (Access Control Lists). Recently I have come across two Cisco UCS implementations which required the Nexus 1000V to support PVLANs in their particular configuration (due to the Fabric Interconnects using End-Host Mode). There are many reasons one would choose to implement the Cisco Nexus 1000V; let’s call it N1KV for short.
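As a taste of those extra features, here is a hedged sketch of an ACL applied through a Nexus 1000V port profile; the ACL name, rules, and profile name are made up for illustration, and the exact syntax can vary between N1KV releases:

ip access-list deny-telnet
  deny tcp any any eq 23
  permit ip any any
port-profile type vethernet Restricted_VMs
  ip port access-group deny-telnet in

Nothing like this is possible on the vDS, which is precisely why the N1KV appeals to network teams.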
Without further delay, grab a coffee and we’ll get the N1KV installed!
Components of the Cisco Nexus 1000V on VMware vSphere
There are two main components of the Cisco Nexus 1000V distributed virtual switch: the VSM (Virtual Supervisor Module) and the VEM (Virtual Ethernet Module). If you are familiar with Cisco products and have worked with physical Cisco switches, then you will already know what supervisor modules and ethernet modules are. In essence, any distributed virtual switch, whether we are talking about the vSphere vDS or the N1KV, has a common architecture: a separated control plane and data plane, which is what makes it ‘distributed’ in the first place. By separating the control plane (VSM) from the data plane (VEM), a distributed switch architecture is possible, as illustrated in the diagram here (left).
Another similarity is the use of port groups. You should be familiar with port groups as they are present on both the VMware vSS and vDS. In Cisco terms, we’re talking about ‘port profiles’, which are configured with the relevant VLANs, QoS, ACLs, etc. Port profiles are presented to vSphere as port groups.
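For example, a minimal vethernet port profile along the following lines (the name and VLAN ID are assumptions for illustration) would surface in vCenter as a port group called Web_VMs, ready to be assigned to virtual machines:

port-profile type vethernet Web_VMs
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled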
Installing the Cisco Nexus 1000V
What you need:
- If you don’t already have a licensed copy of the Cisco Nexus 1000V, you can download the evaluation here: http://www.cisco.com/go/1000veval
- Note: you will need to register for a Cisco account in order to download the evaluation.
- vSphere environment with vCenter.
- Note: I’m using my vSphere 5 lab for this exercise but vSphere 4.1 will do fine.
- At least one ESX/ESXi host, preferably two or more!
- If you are using a lab environment and don’t have the physical hardware available then create a virtual ESXi server (this post by VCritical details how to do this).
- You’ll also need to create the following VLANs:
- Control
- Management
- Packet
Note: If you are doing this in a lab environment then you can place all of the VLANs into a single VM network, but in production make sure you have separate VLANs for these.
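The VLAN IDs themselves are up to you. As a hedged example, on the upstream Cisco switch you might define something like the following (IDs 260-262 are purely illustrative):

vlan 260
  name n1kv-control
vlan 261
  name n1kv-management
vlan 262
  name n1kv-packet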
In the latest release of the Nexus 1000V, the Java-based installer, which we will come on to in a moment, deploys the VSM (or two VSMs in HA mode) to vCenter, with a GUI wizard guiding you through the steps. This has made deployment of the N1KV even easier than before.
Once you have downloaded the Nexus 1000V from the Cisco website, continue on to the installation steps.
Installation Steps:
1. Extract the .zip file you downloaded from Cisco, and navigate to VSM\Installer_App\Nexus1000V-install.jar. Open this (you need Java installed) and it will launch the installation wizard. Enter the vCenter IP address, along with a username and password.
2. Select the vSphere host on which the VSM will reside and click Next.
3. Select the OVA (in the VSM\Install directory), system redundancy option, virtual machine name and datastore, then click Next.
Note: This step is new, previously you had to deploy the OVA first, then run this wizard. If you choose HA as the redundancy option, it will append -1 or -2 to the virtual machine name.
4. Now configure the networking by selecting your Control, Management and Packet VLANs. Click Next.
Note: In my home lab, I just created three port groups to illustrate this. Obviously in production you would typically have these VLANs defined, otherwise you can create new ones here on the Nexus 1000V.
5. Configure the VSM by entering the switch name, admin password and IP address settings.
Note: The domain ID is common between the VSMs in HA mode, but you will need a unique domain ID if running multiple N1KV switches. For example, set the domain ID to 10. The native VLAN should be set to 1 unless otherwise specified by your network administrator.
6. You can now review your configuration. If it’s all correct, click Next.
7. The installer will now start deploying your VSM (or pair if using HA) with the configuration settings you entered during the wizard.
8. Once it has deployed, you’ll get an option to migrate this host and its networks to the N1KV. Choose No here, as we’ll do this later.
9. Finally you’ll get the installation summary, and you can close the wizard.
You’ll now see two Nexus 1000V VSM virtual machines in vCenter on your host. In a production environment you would typically have the VSMs on separate hosts for resilience. Within vCenter, if you navigate to Inventory > Networking you should now see the Nexus 1000V switch:
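If you want to sanity-check the VSMs from the CLI at this point, SSH to the VSM management IP address and run show module; with HA redundancy you should see output along these lines (trimmed for brevity):

n1kv# show module
Mod  Ports  Module-Type                      Model          Status
---  -----  -------------------------------  -------------  ----------
1    0      Virtual Supervisor Module        Nexus1000V     active *
2    0      Virtual Supervisor Module        Nexus1000V     ha-standby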
Installing the Cisco Nexus 1000V Virtual Ethernet Module (VEM) to ESXi 5
What we are actually doing here is installing the VEM on each of your ESX/ESXi hosts. In the real world I prefer to use VMware Update Manager (VUM) to do this, as it will automatically add the VEM to a host when it is added to the N1KV virtual switch. However, for this tutorial I will show you how to add the VEM using the command line with ESXi 5.
1. Open a web browser and go to the Nexus 1000V web page, http://<IP_ADDRESS>. You will be presented with the Cisco Nexus 1000V extension (XML file) and the VEM software. It’s the VEM we are interested in here, so download the VIB that corresponds to your ESX/ESXi build.
2. Copy the VIB file onto your ESX/ESXi host. You must place it into /var/log/vmware, as ESXi 5 expects the VIB to be present there.
Note: Use the datastore browser in vCenter to do this.
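Alternatively, if SSH is enabled on the host, you can copy the VIB over with scp; a quick sketch, assuming the host is reachable as esxi01 (a hypothetical hostname):

# scp cross_cisco-vem-v140-4.2.1.1.5.1.0-3.0.1.vib root@esxi01:/var/log/vmware/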
3. Log into the ESXi console either directly or using SSH (if it is enabled) and enter the following command:
# esxcli software vib install -v /var/log/vmware/cross_cisco-vem-v140-4.2.1.1.5.1.0-3.0.1.vib
You should then see the following result:
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: Cisco_bootbank_cisco-vem-v140-esx_4.2.1.1.5.1.0-3.0.1
VIBs Removed:
VIBs Skipped:
4. You can verify that the VEM is installed using the following commands:
# esxcli software vib list | grep cisco
cisco-vem-v140-esx 4.2.1.1.5.1.0-3.0.1 Cisco PartnerSupported 2012-04-03
# vem status -v
Package vssnet-esxmn-ga-release
Version 4.2.1.1.5.1.0-3.0.1
Build 1
Date Mon Jan 30 18:38:49 PST 2012
Number of PassThru NICs are 0
VEM modules are loaded
Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
vSwitch0 128 7 128 1500 vmnic0,vmnic1
Number of PassThru NICs are 0
VEM Agent (vemdpa) is running
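To confirm that the VEM has picked up the right domain, vemcmd is also handy; a hedged example, where the domain ID should match the one you set on the VSM earlier (10 in my example):

# vemcmd show card | grep -i domain
Card domain id: 10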
Configuring the Nexus 1000V
Before we add our hosts to the Nexus 1000V we’ll need to create the port profiles, including the uplink port profile. The uplink port profile will be selected when we add our hosts to the switch, and this will typically be a trunk port containing all of the VLANs we wish to trunk to the hosts.
1. Log into the Nexus 1000V using SSH.
2. Create an ethernet port profile as follows:
port-profile type ethernet VM_uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan <IDs>
no shutdown
state enabled
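Once the profile is in place, verify it and, importantly, save the running configuration so it survives a VSM reload:

n1kv# show port-profile name VM_uplink
n1kv# copy running-config startup-config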
Adding ESX/ESXi Hosts to the Cisco Nexus 1000V
The final step is to add your host(s) to the Cisco Nexus 1000V.
1. Within vCenter, browse to Inventory > Networking and select the Cisco Nexus 1000V switch. Right click, and select ‘Add Host’.
2. Select the vmnic(s) of the host(s) you want to add, choose VM_uplink (the uplink port profile we created in the last step) from the dropdown, and click Next.
Note: You’ll notice in the above screenshot that I’m adding a spare vmnic as I don’t want to lose connectivity with my standard vSwitch.
3. Migrate your port groups to the Nexus 1000V, such as the Management VMkernel port (vmk). Click Next.
Note: I chose not to do this; it can be done later.
4. You will then have the opportunity to migrate your virtual machines to the N1KV. This is optional and can be done later. Click Next.
5. Review the summary and click Finish.
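Back on the VSM, show module should now list the VEM alongside the two supervisors; a rough sketch of what to expect (VEMs appear from module 3 onwards):

n1kv# show module
Mod  Ports  Module-Type                      Model          Status
---  -----  -------------------------------  -------------  ----------
1    0      Virtual Supervisor Module        Nexus1000V     active *
2    0      Virtual Supervisor Module        Nexus1000V     ha-standby
3    248    Virtual Ethernet Module          NA             ok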
Summary
We have just downloaded the Cisco Nexus 1000V, installed the VSMs to vCenter, installed the VEM on the host, and added the host to the Cisco Nexus 1000V switch. The next steps are to configure the Nexus 1000V further: port profiles, QoS policies, and so on.